2026-02-20 01:34:02.563433 | Job console starting
2026-02-20 01:34:02.576285 | Updating git repos
2026-02-20 01:34:02.683655 | Cloning repos into workspace
2026-02-20 01:34:02.902985 | Restoring repo states
2026-02-20 01:34:02.930929 | Merging changes
2026-02-20 01:34:02.930962 | Checking out repos
2026-02-20 01:34:03.227247 | Preparing playbooks
2026-02-20 01:34:03.900208 | Running Ansible setup
2026-02-20 01:34:08.346509 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-20 01:34:09.131595 |
2026-02-20 01:34:09.131754 | PLAY [Base pre]
2026-02-20 01:34:09.149046 |
2026-02-20 01:34:09.149173 | TASK [Setup log path fact]
2026-02-20 01:34:09.180078 | orchestrator | ok
2026-02-20 01:34:09.198069 |
2026-02-20 01:34:09.198236 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-20 01:34:09.243547 | orchestrator | ok
2026-02-20 01:34:09.259173 |
2026-02-20 01:34:09.259293 | TASK [emit-job-header : Print job information]
2026-02-20 01:34:09.300948 | # Job Information
2026-02-20 01:34:09.301125 | Ansible Version: 2.16.14
2026-02-20 01:34:09.301160 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-20 01:34:09.301193 | Pipeline: periodic-midnight
2026-02-20 01:34:09.301216 | Executor: 521e9411259a
2026-02-20 01:34:09.301237 | Triggered by: https://github.com/osism/testbed
2026-02-20 01:34:09.301258 | Event ID: f5b0518268144e8a904ad0ede99096f2
2026-02-20 01:34:09.308148 |
2026-02-20 01:34:09.308260 | LOOP [emit-job-header : Print node information]
2026-02-20 01:34:09.438754 | orchestrator | ok:
2026-02-20 01:34:09.439062 | orchestrator | # Node Information
2026-02-20 01:34:09.439122 | orchestrator | Inventory Hostname: orchestrator
2026-02-20 01:34:09.439165 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-20 01:34:09.439201 | orchestrator | Username: zuul-testbed03
2026-02-20 01:34:09.439235 | orchestrator | Distro: Debian 12.13
2026-02-20 01:34:09.439274 | orchestrator | Provider: static-testbed
2026-02-20 01:34:09.439308 | orchestrator | Region:
2026-02-20 01:34:09.439344 | orchestrator | Label: testbed-orchestrator
2026-02-20 01:34:09.439377 | orchestrator | Product Name: OpenStack Nova
2026-02-20 01:34:09.439427 | orchestrator | Interface IP: 81.163.193.140
2026-02-20 01:34:09.468539 |
2026-02-20 01:34:09.468785 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-20 01:34:09.961191 | orchestrator -> localhost | changed
2026-02-20 01:34:09.969547 |
2026-02-20 01:34:09.969666 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-20 01:34:11.051927 | orchestrator -> localhost | changed
2026-02-20 01:34:11.076284 |
2026-02-20 01:34:11.076482 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-20 01:34:11.375705 | orchestrator -> localhost | ok
2026-02-20 01:34:11.394284 |
2026-02-20 01:34:11.394502 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-20 01:34:11.429348 | orchestrator | ok
2026-02-20 01:34:11.450333 | orchestrator | included: /var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-20 01:34:11.458513 |
2026-02-20 01:34:11.458623 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-20 01:34:13.121198 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-20 01:34:13.121574 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/b056cae760f048f69f355ee80d6b87d0_id_rsa
2026-02-20 01:34:13.121646 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/b056cae760f048f69f355ee80d6b87d0_id_rsa.pub
2026-02-20 01:34:13.121689 | orchestrator -> localhost | The key fingerprint is:
2026-02-20 01:34:13.121727 | orchestrator -> localhost | SHA256:2eM2Z2DP5UcShc0zd0Rx3gykhQYcQtIzDMyxMtk+Sj0 zuul-build-sshkey
2026-02-20 01:34:13.121762 | orchestrator -> localhost | The key's randomart image is:
2026-02-20 01:34:13.121808 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-20 01:34:13.121843 | orchestrator -> localhost | | o+*o.oo o+B*|
2026-02-20 01:34:13.121876 | orchestrator -> localhost | | ooo=.. oo.BB|
2026-02-20 01:34:13.121907 | orchestrator -> localhost | | + o o .. . O|
2026-02-20 01:34:13.121937 | orchestrator -> localhost | | = o . |
2026-02-20 01:34:13.121966 | orchestrator -> localhost | | . E S = o .|
2026-02-20 01:34:13.122004 | orchestrator -> localhost | | . . o o = o o |
2026-02-20 01:34:13.122036 | orchestrator -> localhost | | . + = . .|
2026-02-20 01:34:13.122067 | orchestrator -> localhost | | . + . |
2026-02-20 01:34:13.122099 | orchestrator -> localhost | | |
2026-02-20 01:34:13.122130 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-20 01:34:13.122209 | orchestrator -> localhost | ok: Runtime: 0:00:01.154788
2026-02-20 01:34:13.132675 |
2026-02-20 01:34:13.132799 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-20 01:34:13.168940 | orchestrator | ok
2026-02-20 01:34:13.182291 | orchestrator | included: /var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-20 01:34:13.191943 |
2026-02-20 01:34:13.192046 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-20 01:34:13.225833 | orchestrator | skipping: Conditional result was False
2026-02-20 01:34:13.236283 |
2026-02-20 01:34:13.236417 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-20 01:34:13.851197 | orchestrator | changed
2026-02-20 01:34:13.862579 |
2026-02-20 01:34:13.862716 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-20 01:34:14.176977 | orchestrator | ok
2026-02-20 01:34:14.185770 |
2026-02-20 01:34:14.185915 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-20 01:34:14.683982 | orchestrator | ok
2026-02-20 01:34:14.695030 |
2026-02-20 01:34:14.695174 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-20 01:34:15.161289 | orchestrator | ok
2026-02-20 01:34:15.169794 |
2026-02-20 01:34:15.169934 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-20 01:34:15.205007 | orchestrator | skipping: Conditional result was False
2026-02-20 01:34:15.219491 |
2026-02-20 01:34:15.219676 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-20 01:34:15.702298 | orchestrator -> localhost | changed
2026-02-20 01:34:15.725609 |
2026-02-20 01:34:15.725750 | TASK [add-build-sshkey : Add back temp key]
2026-02-20 01:34:16.106377 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/b056cae760f048f69f355ee80d6b87d0_id_rsa (zuul-build-sshkey)
2026-02-20 01:34:16.107035 | orchestrator -> localhost | ok: Runtime: 0:00:00.020668
2026-02-20 01:34:16.121702 |
2026-02-20 01:34:16.121857 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-20 01:34:16.575648 | orchestrator | ok
2026-02-20 01:34:16.583800 |
2026-02-20 01:34:16.583931 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-20 01:34:16.618823 | orchestrator | skipping: Conditional result was False
2026-02-20 01:34:16.678192 |
2026-02-20 01:34:16.678336 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-20 01:34:17.149992 | orchestrator | ok
2026-02-20 01:34:17.163835 |
2026-02-20 01:34:17.163962 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-20 01:34:17.211032 | orchestrator | ok
2026-02-20 01:34:17.221161 |
2026-02-20 01:34:17.221289 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-20 01:34:17.539219 | orchestrator -> localhost | ok
2026-02-20 01:34:17.546905 |
2026-02-20 01:34:17.547022 | TASK [validate-host : Collect information about the host]
2026-02-20 01:34:18.827717 | orchestrator | ok
2026-02-20 01:34:18.845435 |
2026-02-20 01:34:18.845574 | TASK [validate-host : Sanitize hostname]
2026-02-20 01:34:18.920145 | orchestrator | ok
2026-02-20 01:34:18.927790 |
2026-02-20 01:34:18.927925 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-20 01:34:19.506992 | orchestrator -> localhost | changed
2026-02-20 01:34:19.521063 |
2026-02-20 01:34:19.521244 | TASK [validate-host : Collect information about zuul worker]
2026-02-20 01:34:19.983503 | orchestrator | ok
2026-02-20 01:34:19.992313 |
2026-02-20 01:34:19.992539 | TASK [validate-host : Write out all zuul information for each host]
2026-02-20 01:34:20.565201 | orchestrator -> localhost | changed
2026-02-20 01:34:20.584972 |
2026-02-20 01:34:20.585106 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-20 01:34:20.923075 | orchestrator | ok
2026-02-20 01:34:20.932708 |
2026-02-20 01:34:20.932826 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-20 01:34:39.374636 | orchestrator | changed:
2026-02-20 01:34:39.374913 | orchestrator | .d..t...... src/
2026-02-20 01:34:39.374959 | orchestrator | .d..t...... src/github.com/
2026-02-20 01:34:39.374991 | orchestrator | .d..t...... src/github.com/osism/
2026-02-20 01:34:39.375018 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-20 01:34:39.375044 | orchestrator | RedHat.yml
2026-02-20 01:34:39.389688 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-20 01:34:39.389706 | orchestrator | RedHat.yml
2026-02-20 01:34:39.389757 | orchestrator | = 1.53.0"...
2026-02-20 01:34:50.269930 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-20 01:34:50.291611 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-20 01:34:50.815634 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-20 01:34:51.897218 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-20 01:34:51.963811 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-02-20 01:34:52.535707 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-20 01:34:52.600784 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-20 01:34:53.312754 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-20 01:34:53.312820 | orchestrator |
2026-02-20 01:34:53.312827 | orchestrator | Providers are signed by their developers.
2026-02-20 01:34:53.312832 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-20 01:34:53.312842 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-20 01:34:53.312870 | orchestrator |
2026-02-20 01:34:53.312876 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-20 01:34:53.312880 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-20 01:34:53.312890 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-20 01:34:53.312901 | orchestrator | you run "tofu init" in the future.
2026-02-20 01:34:53.365856 | orchestrator |
2026-02-20 01:34:53.366140 | orchestrator | OpenTofu has been successfully initialized!
2026-02-20 01:34:53.366221 | orchestrator |
2026-02-20 01:34:53.366240 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-20 01:34:53.366252 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-20 01:34:53.366263 | orchestrator | should now work.
2026-02-20 01:34:53.366274 | orchestrator |
2026-02-20 01:34:53.366285 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-20 01:34:53.366296 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-20 01:34:53.366326 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-20 01:34:53.546158 | orchestrator | Created and switched to workspace "ci"!
2026-02-20 01:34:53.546207 | orchestrator |
2026-02-20 01:34:53.546214 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-20 01:34:53.546219 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-20 01:34:53.546224 | orchestrator | for this configuration.
2026-02-20 01:34:53.692032 | orchestrator | ci.auto.tfvars
2026-02-20 01:34:54.546893 | orchestrator | default_custom.tf
2026-02-20 01:34:56.899112 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-20 01:34:57.458831 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-20 01:34:57.692231 | orchestrator |
2026-02-20 01:34:57.692326 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-20 01:34:57.692352 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-20 01:34:57.692371 | orchestrator |   + create
2026-02-20 01:34:57.692388 | orchestrator |  <= read (data resources)
2026-02-20 01:34:57.692426 | orchestrator |
2026-02-20 01:34:57.692443 | orchestrator | OpenTofu will perform the following actions:
2026-02-20 01:34:57.692474 | orchestrator |
2026-02-20 01:34:57.692493 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-20 01:34:57.692510 | orchestrator |   # (config refers to values not yet known)
2026-02-20 01:34:57.692527 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-20 01:34:57.692544 | orchestrator |       + checksum = (known after apply)
2026-02-20 01:34:57.692561 | orchestrator |       + created_at = (known after apply)
2026-02-20 01:34:57.692577 | orchestrator |       + file = (known after apply)
2026-02-20 01:34:57.692595 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.692643 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.692660 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-20 01:34:57.692678 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-20 01:34:57.692696 | orchestrator |       + most_recent = true
2026-02-20 01:34:57.692713 | orchestrator |       + name = (known after apply)
2026-02-20 01:34:57.692729 | orchestrator |       + protected = (known after apply)
2026-02-20 01:34:57.692744 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.692766 | orchestrator |       + schema = (known after apply)
2026-02-20 01:34:57.692783 | orchestrator |       + size_bytes = (known after apply)
2026-02-20 01:34:57.692800 | orchestrator |       + tags = (known after apply)
2026-02-20 01:34:57.692816 | orchestrator |       + updated_at = (known after apply)
2026-02-20 01:34:57.692833 | orchestrator |     }
2026-02-20 01:34:57.692850 | orchestrator |
2026-02-20 01:34:57.692867 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-20 01:34:57.692885 | orchestrator |   # (config refers to values not yet known)
2026-02-20 01:34:57.692901 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-20 01:34:57.692918 | orchestrator |       + checksum = (known after apply)
2026-02-20 01:34:57.692934 | orchestrator |       + created_at = (known after apply)
2026-02-20 01:34:57.692950 | orchestrator |       + file = (known after apply)
2026-02-20 01:34:57.692967 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.692984 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.693085 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-20 01:34:57.693103 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-20 01:34:57.693119 | orchestrator |       + most_recent = true
2026-02-20 01:34:57.693136 | orchestrator |       + name = (known after apply)
2026-02-20 01:34:57.693152 | orchestrator |       + protected = (known after apply)
2026-02-20 01:34:57.693169 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.693185 | orchestrator |       + schema = (known after apply)
2026-02-20 01:34:57.693202 | orchestrator |       + size_bytes = (known after apply)
2026-02-20 01:34:57.693218 | orchestrator |       + tags = (known after apply)
2026-02-20 01:34:57.693233 | orchestrator |       + updated_at = (known after apply)
2026-02-20 01:34:57.693250 | orchestrator |     }
2026-02-20 01:34:57.693266 | orchestrator |
2026-02-20 01:34:57.693283 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-20 01:34:57.693301 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-20 01:34:57.693317 | orchestrator |       + content = (known after apply)
2026-02-20 01:34:57.693334 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-20 01:34:57.693350 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-20 01:34:57.693367 | orchestrator |       + content_md5 = (known after apply)
2026-02-20 01:34:57.693384 | orchestrator |       + content_sha1 = (known after apply)
2026-02-20 01:34:57.693400 | orchestrator |       + content_sha256 = (known after apply)
2026-02-20 01:34:57.693417 | orchestrator |       + content_sha512 = (known after apply)
2026-02-20 01:34:57.693433 | orchestrator |       + directory_permission = "0777"
2026-02-20 01:34:57.693449 | orchestrator |       + file_permission = "0644"
2026-02-20 01:34:57.693466 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-20 01:34:57.693483 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.693500 | orchestrator |     }
2026-02-20 01:34:57.693524 | orchestrator |
2026-02-20 01:34:57.693542 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-20 01:34:57.693558 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-20 01:34:57.693575 | orchestrator |       + content = (known after apply)
2026-02-20 01:34:57.693592 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-20 01:34:57.693609 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-20 01:34:57.693626 | orchestrator |       + content_md5 = (known after apply)
2026-02-20 01:34:57.693642 | orchestrator |       + content_sha1 = (known after apply)
2026-02-20 01:34:57.693658 | orchestrator |       + content_sha256 = (known after apply)
2026-02-20 01:34:57.693675 | orchestrator |       + content_sha512 = (known after apply)
2026-02-20 01:34:57.693691 | orchestrator |       + directory_permission = "0777"
2026-02-20 01:34:57.693708 | orchestrator |       + file_permission = "0644"
2026-02-20 01:34:57.693737 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-20 01:34:57.693755 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.693771 | orchestrator |     }
2026-02-20 01:34:57.693788 | orchestrator |
2026-02-20 01:34:57.693818 | orchestrator |   # local_file.inventory will be created
2026-02-20 01:34:57.693834 | orchestrator |   + resource "local_file" "inventory" {
2026-02-20 01:34:57.693850 | orchestrator |       + content = (known after apply)
2026-02-20 01:34:57.693867 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-20 01:34:57.693884 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-20 01:34:57.693901 | orchestrator |       + content_md5 = (known after apply)
2026-02-20 01:34:57.693918 | orchestrator |       + content_sha1 = (known after apply)
2026-02-20 01:34:57.693935 | orchestrator |       + content_sha256 = (known after apply)
2026-02-20 01:34:57.693951 | orchestrator |       + content_sha512 = (known after apply)
2026-02-20 01:34:57.693968 | orchestrator |       + directory_permission = "0777"
2026-02-20 01:34:57.693984 | orchestrator |       + file_permission = "0644"
2026-02-20 01:34:57.694156 | orchestrator |       + filename = "inventory.ci"
2026-02-20 01:34:57.694182 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694200 | orchestrator |     }
2026-02-20 01:34:57.694219 | orchestrator |
2026-02-20 01:34:57.694237 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-20 01:34:57.694256 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-20 01:34:57.694274 | orchestrator |       + content = (sensitive value)
2026-02-20 01:34:57.694293 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-20 01:34:57.694312 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-20 01:34:57.694331 | orchestrator |       + content_md5 = (known after apply)
2026-02-20 01:34:57.694348 | orchestrator |       + content_sha1 = (known after apply)
2026-02-20 01:34:57.694366 | orchestrator |       + content_sha256 = (known after apply)
2026-02-20 01:34:57.694386 | orchestrator |       + content_sha512 = (known after apply)
2026-02-20 01:34:57.694404 | orchestrator |       + directory_permission = "0700"
2026-02-20 01:34:57.694424 | orchestrator |       + file_permission = "0600"
2026-02-20 01:34:57.694441 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-20 01:34:57.694458 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694488 | orchestrator |     }
2026-02-20 01:34:57.694507 | orchestrator |
2026-02-20 01:34:57.694522 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-20 01:34:57.694537 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-20 01:34:57.694552 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694567 | orchestrator |     }
2026-02-20 01:34:57.694581 | orchestrator |
2026-02-20 01:34:57.694597 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-20 01:34:57.694612 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-20 01:34:57.694627 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.694642 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.694656 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694670 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.694685 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.694700 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-20 01:34:57.694708 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.694716 | orchestrator |       + size = 80
2026-02-20 01:34:57.694724 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.694732 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.694739 | orchestrator |     }
2026-02-20 01:34:57.694747 | orchestrator |
2026-02-20 01:34:57.694755 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-20 01:34:57.694763 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.694770 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.694778 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.694786 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694802 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.694810 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.694818 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-20 01:34:57.694826 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.694833 | orchestrator |       + size = 80
2026-02-20 01:34:57.694841 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.694849 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.694857 | orchestrator |     }
2026-02-20 01:34:57.694865 | orchestrator |
2026-02-20 01:34:57.694873 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-20 01:34:57.694880 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.694888 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.694896 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.694904 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.694911 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.694919 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.694927 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-20 01:34:57.694934 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.694942 | orchestrator |       + size = 80
2026-02-20 01:34:57.694950 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.694958 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695024 | orchestrator |     }
2026-02-20 01:34:57.695034 | orchestrator |
2026-02-20 01:34:57.695041 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-20 01:34:57.695049 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.695057 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695065 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695082 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695090 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.695098 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695106 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-20 01:34:57.695114 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695121 | orchestrator |       + size = 80
2026-02-20 01:34:57.695129 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695137 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695150 | orchestrator |     }
2026-02-20 01:34:57.695163 | orchestrator |
2026-02-20 01:34:57.695176 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-20 01:34:57.695189 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.695202 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695215 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695229 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695242 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.695250 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695266 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-20 01:34:57.695274 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695282 | orchestrator |       + size = 80
2026-02-20 01:34:57.695290 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695298 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695305 | orchestrator |     }
2026-02-20 01:34:57.695313 | orchestrator |
2026-02-20 01:34:57.695321 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-20 01:34:57.695329 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.695337 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695345 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695353 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695367 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.695375 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695383 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-20 01:34:57.695391 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695398 | orchestrator |       + size = 80
2026-02-20 01:34:57.695406 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695414 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695422 | orchestrator |     }
2026-02-20 01:34:57.695429 | orchestrator |
2026-02-20 01:34:57.695437 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-20 01:34:57.695445 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-20 01:34:57.695453 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695461 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695469 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695476 | orchestrator |       + image_id = (known after apply)
2026-02-20 01:34:57.695484 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695492 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-20 01:34:57.695500 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695507 | orchestrator |       + size = 80
2026-02-20 01:34:57.695515 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695523 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695531 | orchestrator |     }
2026-02-20 01:34:57.695538 | orchestrator |
2026-02-20 01:34:57.695546 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-20 01:34:57.695554 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.695562 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695570 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695578 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695585 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695593 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-20 01:34:57.695601 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695609 | orchestrator |       + size = 20
2026-02-20 01:34:57.695617 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695625 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695633 | orchestrator |     }
2026-02-20 01:34:57.695641 | orchestrator |
2026-02-20 01:34:57.695649 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-20 01:34:57.695656 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.695664 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695672 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695680 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695687 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695695 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-20 01:34:57.695703 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695711 | orchestrator |       + size = 20
2026-02-20 01:34:57.695719 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695726 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695734 | orchestrator |     }
2026-02-20 01:34:57.695742 | orchestrator |
2026-02-20 01:34:57.695750 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-20 01:34:57.695758 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.695765 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695773 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695781 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695788 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695796 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-20 01:34:57.695804 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695817 | orchestrator |       + size = 20
2026-02-20 01:34:57.695825 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695832 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695840 | orchestrator |     }
2026-02-20 01:34:57.695848 | orchestrator |
2026-02-20 01:34:57.695855 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-20 01:34:57.695863 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.695871 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695879 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.695891 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.695900 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.695907 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-20 01:34:57.695915 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.695923 | orchestrator |       + size = 20
2026-02-20 01:34:57.695931 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.695939 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.695946 | orchestrator |     }
2026-02-20 01:34:57.695954 | orchestrator |
2026-02-20 01:34:57.695962 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-20 01:34:57.695970 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.695978 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.695985 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.696034 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.696043 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.696050 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-20 01:34:57.696058 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.696073 | orchestrator |       + size = 20
2026-02-20 01:34:57.696081 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.696089 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.696097 | orchestrator |     }
2026-02-20 01:34:57.696104 | orchestrator |
2026-02-20 01:34:57.696112 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-20 01:34:57.696120 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.696128 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.696136 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.696144 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.696151 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.696159 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-20 01:34:57.696167 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.696175 | orchestrator |       + size = 20
2026-02-20 01:34:57.696182 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.696190 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.696198 | orchestrator |     }
2026-02-20 01:34:57.696206 | orchestrator |
2026-02-20 01:34:57.696214 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-20 01:34:57.696226 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.696239 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.696247 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.696261 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.696271 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.696285 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-20 01:34:57.696295 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.696303 | orchestrator |       + size = 20
2026-02-20 01:34:57.696310 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.696318 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.696330 | orchestrator |     }
2026-02-20 01:34:57.696342 | orchestrator |
2026-02-20 01:34:57.696350 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-20 01:34:57.696362 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-20 01:34:57.696379 | orchestrator |       + attachment = (known after apply)
2026-02-20 01:34:57.696387 | orchestrator |       + availability_zone = "nova"
2026-02-20 01:34:57.696395 | orchestrator |       + id = (known after apply)
2026-02-20 01:34:57.696404 | orchestrator |       + metadata = (known after apply)
2026-02-20 01:34:57.696418 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-20 01:34:57.696427 | orchestrator |       + region = (known after apply)
2026-02-20 01:34:57.696438 | orchestrator |       + size = 20
2026-02-20 01:34:57.696452 | orchestrator |       + volume_retype_policy = "never"
2026-02-20 01:34:57.696466 | orchestrator |       + volume_type = "ssd"
2026-02-20 01:34:57.696479 | orchestrator |     }
2026-02-20 01:34:57.696492 | orchestrator |
2026-02-20 01:34:57.696505 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-20 01:34:57.696519 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-20 01:34:57.696531 | orchestrator | + attachment = (known after apply) 2026-02-20 01:34:57.696544 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.696558 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.696571 | orchestrator | + metadata = (known after apply) 2026-02-20 01:34:57.696586 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-20 01:34:57.696600 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.696615 | orchestrator | + size = 20 2026-02-20 01:34:57.696623 | orchestrator | + volume_retype_policy = "never" 2026-02-20 01:34:57.696631 | orchestrator | + volume_type = "ssd" 2026-02-20 01:34:57.696644 | orchestrator | } 2026-02-20 01:34:57.696658 | orchestrator | 2026-02-20 01:34:57.696671 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-20 01:34:57.696684 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-20 01:34:57.696698 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.696712 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.696726 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.696739 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.696753 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.696767 | orchestrator | + config_drive = true 2026-02-20 01:34:57.696781 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.696795 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.696809 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-20 01:34:57.696824 | orchestrator | + force_delete = false 2026-02-20 01:34:57.696838 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.696851 | 
orchestrator | + id = (known after apply) 2026-02-20 01:34:57.696864 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.696877 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.696892 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.696905 | orchestrator | + name = "testbed-manager" 2026-02-20 01:34:57.696919 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.696932 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.696946 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.696959 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.696981 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.697014 | orchestrator | + user_data = (sensitive value) 2026-02-20 01:34:57.697029 | orchestrator | 2026-02-20 01:34:57.697045 | orchestrator | + block_device { 2026-02-20 01:34:57.697059 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.697073 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.697094 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.697109 | orchestrator | + multiattach = false 2026-02-20 01:34:57.697124 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.697138 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.697168 | orchestrator | } 2026-02-20 01:34:57.697182 | orchestrator | 2026-02-20 01:34:57.697197 | orchestrator | + network { 2026-02-20 01:34:57.697211 | orchestrator | + access_network = false 2026-02-20 01:34:57.697225 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.697239 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.697253 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.697268 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.697283 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.697298 | orchestrator | + uuid = (known after apply) 2026-02-20 
01:34:57.697312 | orchestrator | } 2026-02-20 01:34:57.697326 | orchestrator | } 2026-02-20 01:34:57.697340 | orchestrator | 2026-02-20 01:34:57.697355 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-20 01:34:57.697370 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.697385 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.697399 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.697413 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.697427 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.697441 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.697456 | orchestrator | + config_drive = true 2026-02-20 01:34:57.697470 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.697484 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.697499 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.697513 | orchestrator | + force_delete = false 2026-02-20 01:34:57.697528 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.697542 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.697557 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.697570 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.697585 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.697600 | orchestrator | + name = "testbed-node-0" 2026-02-20 01:34:57.697614 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.697629 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.697643 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.697658 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.697673 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.697687 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.697702 | orchestrator | 2026-02-20 01:34:57.697716 | orchestrator | + block_device { 2026-02-20 01:34:57.697729 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.697744 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.697758 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.697773 | orchestrator | + multiattach = false 2026-02-20 01:34:57.697787 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.697801 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.697816 | orchestrator | } 2026-02-20 01:34:57.697829 | orchestrator | 2026-02-20 01:34:57.697844 | orchestrator | + network { 2026-02-20 01:34:57.697860 | orchestrator | + access_network = false 2026-02-20 01:34:57.697875 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.697889 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.697904 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.697919 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.697935 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.697949 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.697965 | orchestrator | } 2026-02-20 01:34:57.697980 | orchestrator | } 2026-02-20 01:34:57.698008 | orchestrator | 2026-02-20 01:34:57.698064 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-20 01:34:57.698073 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.698081 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.698095 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.698103 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.698111 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.698119 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.698126 
| orchestrator | + config_drive = true 2026-02-20 01:34:57.698134 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.698142 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.698149 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.698157 | orchestrator | + force_delete = false 2026-02-20 01:34:57.698165 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.698172 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.698180 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.698188 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.698195 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.698203 | orchestrator | + name = "testbed-node-1" 2026-02-20 01:34:57.698211 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.698219 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.698226 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.698234 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.698242 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.698249 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.698257 | orchestrator | 2026-02-20 01:34:57.698265 | orchestrator | + block_device { 2026-02-20 01:34:57.698272 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.698280 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.698288 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.698295 | orchestrator | + multiattach = false 2026-02-20 01:34:57.698303 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.698311 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.698318 | orchestrator | } 2026-02-20 01:34:57.698326 | orchestrator | 2026-02-20 01:34:57.698334 | orchestrator | + network { 2026-02-20 01:34:57.698348 | orchestrator | + access_network = 
false 2026-02-20 01:34:57.698356 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.698367 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.698381 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.698395 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.698408 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.698422 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.698436 | orchestrator | } 2026-02-20 01:34:57.698447 | orchestrator | } 2026-02-20 01:34:57.698455 | orchestrator | 2026-02-20 01:34:57.698463 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-20 01:34:57.698477 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.698490 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.698503 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.698518 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.698532 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.698554 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.698562 | orchestrator | + config_drive = true 2026-02-20 01:34:57.698570 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.698583 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.698597 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.698610 | orchestrator | + force_delete = false 2026-02-20 01:34:57.698624 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.698638 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.698651 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.698672 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.698681 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.698695 | orchestrator | + name = 
"testbed-node-2" 2026-02-20 01:34:57.698708 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.698722 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.698735 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.698749 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.698762 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.698775 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.698789 | orchestrator | 2026-02-20 01:34:57.698803 | orchestrator | + block_device { 2026-02-20 01:34:57.698817 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.698830 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.698844 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.698858 | orchestrator | + multiattach = false 2026-02-20 01:34:57.698870 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.698884 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.698896 | orchestrator | } 2026-02-20 01:34:57.698910 | orchestrator | 2026-02-20 01:34:57.698923 | orchestrator | + network { 2026-02-20 01:34:57.698937 | orchestrator | + access_network = false 2026-02-20 01:34:57.698951 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.698964 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.698977 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.699016 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.699031 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.699045 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.699059 | orchestrator | } 2026-02-20 01:34:57.699072 | orchestrator | } 2026-02-20 01:34:57.699085 | orchestrator | 2026-02-20 01:34:57.699099 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-20 01:34:57.699112 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.699127 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.699141 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.699154 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.699168 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.699181 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.699194 | orchestrator | + config_drive = true 2026-02-20 01:34:57.699209 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.699222 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.699236 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.699249 | orchestrator | + force_delete = false 2026-02-20 01:34:57.699263 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.699276 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.699290 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.699304 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.699317 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.699330 | orchestrator | + name = "testbed-node-3" 2026-02-20 01:34:57.699344 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.699357 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.699371 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.699384 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.699398 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.699412 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.699426 | orchestrator | 2026-02-20 01:34:57.699439 | orchestrator | + block_device { 2026-02-20 01:34:57.699459 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.699473 | orchestrator | + delete_on_termination = false 2026-02-20 
01:34:57.699487 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.699508 | orchestrator | + multiattach = false 2026-02-20 01:34:57.699521 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.699534 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.699548 | orchestrator | } 2026-02-20 01:34:57.699562 | orchestrator | 2026-02-20 01:34:57.699576 | orchestrator | + network { 2026-02-20 01:34:57.699589 | orchestrator | + access_network = false 2026-02-20 01:34:57.699602 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.699615 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.699629 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.699642 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.699656 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.699670 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.699683 | orchestrator | } 2026-02-20 01:34:57.699696 | orchestrator | } 2026-02-20 01:34:57.699710 | orchestrator | 2026-02-20 01:34:57.699724 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-20 01:34:57.699744 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.699759 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.699772 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.699785 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.699799 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.699812 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.699825 | orchestrator | + config_drive = true 2026-02-20 01:34:57.699839 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.699853 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.699866 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.699879 | 
orchestrator | + force_delete = false 2026-02-20 01:34:57.699892 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.699905 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.699919 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.699933 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.699946 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.699959 | orchestrator | + name = "testbed-node-4" 2026-02-20 01:34:57.699972 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.699985 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.700043 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.700057 | orchestrator | + stop_before_destroy = false 2026-02-20 01:34:57.700071 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.700084 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.700097 | orchestrator | 2026-02-20 01:34:57.700110 | orchestrator | + block_device { 2026-02-20 01:34:57.700124 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.700138 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.700147 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.700154 | orchestrator | + multiattach = false 2026-02-20 01:34:57.700162 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.700170 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.700178 | orchestrator | } 2026-02-20 01:34:57.700185 | orchestrator | 2026-02-20 01:34:57.700193 | orchestrator | + network { 2026-02-20 01:34:57.700201 | orchestrator | + access_network = false 2026-02-20 01:34:57.700209 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.700216 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.700224 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.700232 | orchestrator | + name = (known 
after apply) 2026-02-20 01:34:57.700239 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.700247 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.700255 | orchestrator | } 2026-02-20 01:34:57.700263 | orchestrator | } 2026-02-20 01:34:57.700277 | orchestrator | 2026-02-20 01:34:57.700285 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-20 01:34:57.700293 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-20 01:34:57.700300 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-20 01:34:57.700308 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-20 01:34:57.700316 | orchestrator | + all_metadata = (known after apply) 2026-02-20 01:34:57.700324 | orchestrator | + all_tags = (known after apply) 2026-02-20 01:34:57.700331 | orchestrator | + availability_zone = "nova" 2026-02-20 01:34:57.700339 | orchestrator | + config_drive = true 2026-02-20 01:34:57.700347 | orchestrator | + created = (known after apply) 2026-02-20 01:34:57.700355 | orchestrator | + flavor_id = (known after apply) 2026-02-20 01:34:57.700362 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-20 01:34:57.700371 | orchestrator | + force_delete = false 2026-02-20 01:34:57.700392 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-20 01:34:57.700406 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.700419 | orchestrator | + image_id = (known after apply) 2026-02-20 01:34:57.700432 | orchestrator | + image_name = (known after apply) 2026-02-20 01:34:57.700447 | orchestrator | + key_pair = "testbed" 2026-02-20 01:34:57.700461 | orchestrator | + name = "testbed-node-5" 2026-02-20 01:34:57.700472 | orchestrator | + power_state = "active" 2026-02-20 01:34:57.700479 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.700485 | orchestrator | + security_groups = (known after apply) 2026-02-20 01:34:57.700492 | orchestrator | + 
stop_before_destroy = false 2026-02-20 01:34:57.700504 | orchestrator | + updated = (known after apply) 2026-02-20 01:34:57.700515 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-20 01:34:57.700527 | orchestrator | 2026-02-20 01:34:57.700538 | orchestrator | + block_device { 2026-02-20 01:34:57.700550 | orchestrator | + boot_index = 0 2026-02-20 01:34:57.700562 | orchestrator | + delete_on_termination = false 2026-02-20 01:34:57.700574 | orchestrator | + destination_type = "volume" 2026-02-20 01:34:57.700581 | orchestrator | + multiattach = false 2026-02-20 01:34:57.700588 | orchestrator | + source_type = "volume" 2026-02-20 01:34:57.700594 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.700606 | orchestrator | } 2026-02-20 01:34:57.700617 | orchestrator | 2026-02-20 01:34:57.700628 | orchestrator | + network { 2026-02-20 01:34:57.700639 | orchestrator | + access_network = false 2026-02-20 01:34:57.700652 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-20 01:34:57.700663 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-20 01:34:57.700675 | orchestrator | + mac = (known after apply) 2026-02-20 01:34:57.700685 | orchestrator | + name = (known after apply) 2026-02-20 01:34:57.700691 | orchestrator | + port = (known after apply) 2026-02-20 01:34:57.700699 | orchestrator | + uuid = (known after apply) 2026-02-20 01:34:57.700711 | orchestrator | } 2026-02-20 01:34:57.700723 | orchestrator | } 2026-02-20 01:34:57.700734 | orchestrator | 2026-02-20 01:34:57.700745 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-20 01:34:57.700757 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-20 01:34:57.700768 | orchestrator | + fingerprint = (known after apply) 2026-02-20 01:34:57.700779 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.700790 | orchestrator | + name = "testbed" 2026-02-20 01:34:57.700803 | orchestrator | + private_key = 
(sensitive value) 2026-02-20 01:34:57.700815 | orchestrator | + public_key = (known after apply) 2026-02-20 01:34:57.700826 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.700837 | orchestrator | + user_id = (known after apply) 2026-02-20 01:34:57.700849 | orchestrator | } 2026-02-20 01:34:57.700861 | orchestrator | 2026-02-20 01:34:57.700872 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-20 01:34:57.700883 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-20 01:34:57.700908 | orchestrator | + device = (known after apply) 2026-02-20 01:34:57.700920 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.700932 | orchestrator | + instance_id = (known after apply) 2026-02-20 01:34:57.700943 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.700955 | orchestrator | + volume_id = (known after apply) 2026-02-20 01:34:57.700966 | orchestrator | } 2026-02-20 01:34:57.700978 | orchestrator | 2026-02-20 01:34:57.700989 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-20 01:34:57.701018 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-20 01:34:57.701030 | orchestrator | + device = (known after apply) 2026-02-20 01:34:57.701041 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.701051 | orchestrator | + instance_id = (known after apply) 2026-02-20 01:34:57.701063 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.701074 | orchestrator | + volume_id = (known after apply) 2026-02-20 01:34:57.701085 | orchestrator | } 2026-02-20 01:34:57.701097 | orchestrator | 2026-02-20 01:34:57.701108 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-20 01:34:57.701120 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-20 01:34:57.707071 | orchestrator | + network_id = (known after apply) 2026-02-20 01:34:57.707077 | orchestrator | + no_gateway = false 2026-02-20 01:34:57.707082 | orchestrator | + region = (known after apply) 2026-02-20 01:34:57.707087 | orchestrator | + service_types = (known after apply) 2026-02-20 01:34:57.707097 | orchestrator | + tenant_id = (known after apply) 2026-02-20 01:34:57.707102 | orchestrator | 2026-02-20 01:34:57.707108 | orchestrator | + allocation_pool { 2026-02-20 01:34:57.707113 | orchestrator | + end = "192.168.31.250" 2026-02-20 01:34:57.707118 | orchestrator | + start = "192.168.31.200" 2026-02-20 01:34:57.707124 | orchestrator | } 2026-02-20 01:34:57.707129 | orchestrator | } 2026-02-20 01:34:57.707134 | orchestrator | 2026-02-20 01:34:57.707140 | orchestrator | # terraform_data.image will be created 2026-02-20 01:34:57.707145 | orchestrator | + resource "terraform_data" "image" { 2026-02-20 01:34:57.707150 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.707157 | orchestrator | + input = "Ubuntu 24.04" 2026-02-20 01:34:57.707166 | orchestrator | + output = (known after apply) 2026-02-20 01:34:57.707175 | orchestrator | } 2026-02-20 01:34:57.707184 | orchestrator | 2026-02-20 01:34:57.707193 | orchestrator | # terraform_data.image_node will be created 2026-02-20 01:34:57.707238 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-20 01:34:57.707255 | orchestrator | + id = (known after apply) 2026-02-20 01:34:57.707265 | orchestrator | + input = "Ubuntu 24.04" 2026-02-20 01:34:57.707275 | orchestrator | + output = (known after apply) 2026-02-20 01:34:57.707285 | orchestrator | } 2026-02-20 01:34:57.707295 | orchestrator | 2026-02-20 01:34:57.707307 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-20 01:34:57.707317 | orchestrator | 2026-02-20 01:34:57.707327 | orchestrator | Changes to Outputs: 2026-02-20 01:34:57.707333 | orchestrator | + manager_address = (sensitive value) 2026-02-20 01:34:57.707339 | orchestrator | + private_key = (sensitive value) 2026-02-20 01:34:57.867058 | orchestrator | terraform_data.image: Creating... 2026-02-20 01:34:57.867673 | orchestrator | terraform_data.image_node: Creating... 2026-02-20 01:34:57.867799 | orchestrator | terraform_data.image: Creation complete after 0s [id=e6b0b6d6-b32a-0898-538d-94e824c302bf] 2026-02-20 01:34:57.955472 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=8e72a0ee-2105-267d-42d4-9f1b49dad0c0] 2026-02-20 01:34:57.977964 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-20 01:34:57.982101 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-20 01:34:57.985617 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-20 01:34:57.989866 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-20 01:34:57.993699 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-20 01:34:57.993728 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-20 01:34:57.993733 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-20 01:34:57.993737 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-02-20 01:34:57.993741 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-20 01:34:57.995782 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2026-02-20 01:34:58.468654 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-20 01:34:58.470548 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-20 01:34:58.473580 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-20 01:34:58.478258 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-20 01:34:58.503544 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-20 01:34:58.510365 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-20 01:34:59.085375 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=dbf629bb-6416-4059-9abe-bd47930e74af]
2026-02-20 01:34:59.094705 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-20 01:35:01.594691 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=71e39072-aa44-4a66-a05c-ec4b85d3c9c8]
2026-02-20 01:35:01.598602 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-20 01:35:01.621366 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=528e4f8d-abfb-4f6b-8b31-c44acf335289]
2026-02-20 01:35:01.627621 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-20 01:35:01.645171 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f09aecfd-253e-43ea-a63d-1297b744a3ca]
2026-02-20 01:35:01.651159 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=65f3eac9-8b9d-466c-8b14-5677fbc93ea2]
2026-02-20 01:35:01.653641 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-20 01:35:01.658825 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-20 01:35:01.671762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=4d1d8767-3f9a-4df7-a383-889dd3aae737]
2026-02-20 01:35:01.672419 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=6dc65ba2-ebcf-4c2d-a294-11a042e511e6]
2026-02-20 01:35:01.678704 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-20 01:35:01.683543 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-20 01:35:01.706982 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=7c48204c-9f75-4242-ad46-06da15902d57]
2026-02-20 01:35:01.718780 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-20 01:35:01.724818 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=35df89d5-061b-439b-8792-1a54b4ca06e9]
2026-02-20 01:35:01.728479 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=f24e8b7dbe63f61fce07dc05b1b58c5a16381263]
2026-02-20 01:35:01.730752 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=072c6774-113a-4ca1-a8e7-4c165b03fe25]
2026-02-20 01:35:01.732065 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-20 01:35:01.737133 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-20 01:35:01.741020 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=459d1643b6434503352b86f0738c64f58dbe19b4]
2026-02-20 01:35:02.423559 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=0c1d2133-543d-47a1-9a8f-77b9e889b460]
2026-02-20 01:35:02.557497 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ddfd2044-0853-4c4d-a6fc-a1eb054cd1dc]
2026-02-20 01:35:02.566403 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-20 01:35:04.967518 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=be990183-125b-4ff4-addd-12788a17416c]
2026-02-20 01:35:05.018296 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=6a45b1b5-2fa2-48ee-bac2-1b370ef97102]
2026-02-20 01:35:05.041756 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=801ae611-6693-4495-a7bb-f144e2a48178]
2026-02-20 01:35:05.067459 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=d7eff79e-3548-4942-90ff-36a0d3bc2152]
2026-02-20 01:35:05.079915 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d0ac2488-5320-49f6-a574-46dd8e496aa4]
2026-02-20 01:35:05.097561 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3bf70d99-e61c-4837-83b3-53782c1e170c]
2026-02-20 01:35:05.700439 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=dca6d54b-c811-487b-9967-64a0e1fe2bd0]
2026-02-20 01:35:05.704225 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-20 01:35:05.704923 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-20 01:35:05.707330 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-20 01:35:05.903571 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=abd03d92-17be-48c3-85d7-8deb2e55e0ca]
2026-02-20 01:35:05.919059 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-20 01:35:05.927170 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-20 01:35:05.927224 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-20 01:35:05.927248 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-20 01:35:05.927256 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-20 01:35:05.927264 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-20 01:35:05.927271 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-20 01:35:05.929690 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-20 01:35:05.950921 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=13339cd1-e8ba-4db9-b19c-27ed91c174d7]
2026-02-20 01:35:05.961235 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-20 01:35:06.238324 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f51b4664-d090-4646-b468-8039b25099bb]
2026-02-20 01:35:06.252313 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-20 01:35:06.565933 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=45b62b06-fcef-4a93-a588-df950fd42e6a]
2026-02-20 01:35:06.572104 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=56de1908-bfc7-426c-80de-57152aca1070]
2026-02-20 01:35:06.573090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-20 01:35:06.581845 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-20 01:35:06.604423 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=49a59431-5189-4d26-ba27-c0613864cfbc]
2026-02-20 01:35:06.607633 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=45f94851-f753-4b29-9bfc-ef221dcc0499]
2026-02-20 01:35:06.613044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-20 01:35:06.616711 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-20 01:35:06.625780 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d753ba4d-5bc5-4376-97d9-3983fd0072c1]
2026-02-20 01:35:06.632600 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-20 01:35:06.721897 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=f0b0c6a1-3852-4c04-b934-e50c10b42068]
2026-02-20 01:35:06.728174 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-20 01:35:06.851597 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=260e4e87-2983-4dc6-8e58-16d987ab721c]
2026-02-20 01:35:06.854669 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6b592d86-60bb-4802-ac8e-b945ef9906e0]
2026-02-20 01:35:06.873986 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=51340504-fcf1-4ea3-ab7c-40777f2a695d]
2026-02-20 01:35:06.908830 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=bacb8422-1256-4bbd-8df5-6340513b3e9c]
2026-02-20 01:35:07.004020 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=2e2a4e73-5150-4b01-a2b4-6e27888e2c19]
2026-02-20 01:35:07.066984 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=cdebda0a-213c-49ab-a7ff-e98b03267256]
2026-02-20 01:35:07.121753 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=971fe8e2-50a4-43d7-9343-3a8f44a83d7c]
2026-02-20 01:35:07.209543 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=2fc26b0e-3c0a-4f29-a2c7-1fa7e096fe50]
2026-02-20 01:35:07.355764 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=86ec22e1-5650-47dc-8b93-f132c8e03d21]
2026-02-20 01:35:07.703544 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=d89fb390-b317-499e-9b89-74413d5caeb7]
2026-02-20 01:35:07.731589 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-20 01:35:07.741810 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-20 01:35:07.741979 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-20 01:35:07.743892 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-20 01:35:07.744333 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-20 01:35:07.755308 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-20 01:35:07.755761 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-20 01:35:09.779956 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=abd2655d-01bb-4c3d-bb5c-7fffcd9bd239]
2026-02-20 01:35:09.789413 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-20 01:35:09.795090 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-20 01:35:09.798658 | orchestrator | local_file.inventory: Creating...
2026-02-20 01:35:09.800742 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=a44dce13db5b103ec876c3e594541fa3d9b0647c]
2026-02-20 01:35:09.804641 | orchestrator | local_file.inventory: Creation complete after 0s [id=8f09b07d666c1bcc5bdacec9c05aa9bd3d092506]
2026-02-20 01:35:10.515719 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=abd2655d-01bb-4c3d-bb5c-7fffcd9bd239]
2026-02-20 01:35:17.744419 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-20 01:35:17.744541 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-20 01:35:17.745599 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-20 01:35:17.745714 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-20 01:35:17.758228 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-20 01:35:17.758296 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-20 01:35:27.745689 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-20 01:35:27.745780 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-20 01:35:27.745788 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-20 01:35:27.745797 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-20 01:35:27.759489 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-20 01:35:27.759597 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-20 01:35:28.594762 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=b708b65d-b610-40fd-bc81-0d2313e47ce9]
2026-02-20 01:35:29.100220 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=67db89a1-4700-49f9-ad3f-6f9f27d4fe7b]
2026-02-20 01:35:29.223115 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=559627f5-344c-4d5d-a77f-fed9532b0a40]
2026-02-20 01:35:37.751897 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-20 01:35:37.752071 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-20 01:35:37.752102 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-20 01:35:38.148969 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=2cbc2796-472d-474c-9187-9b1d36ec8b30]
2026-02-20 01:35:38.736691 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=1aeb11ac-b9eb-4b87-ab85-e837a353116c]
2026-02-20 01:35:39.006631 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=e5e6dfe0-6d58-4cdc-8799-cbd03e40fd3e]
2026-02-20 01:35:39.029450 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-20 01:35:39.033977 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6299940460103204943]
2026-02-20 01:35:39.035201 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-20 01:35:39.035890 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-20 01:35:39.038150 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-20 01:35:39.038726 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-20 01:35:39.050834 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-20 01:35:39.051586 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-20 01:35:39.056646 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-20 01:35:39.068252 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-20 01:35:39.071729 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-20 01:35:39.075597 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-20 01:35:42.411629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=67db89a1-4700-49f9-ad3f-6f9f27d4fe7b/f09aecfd-253e-43ea-a63d-1297b744a3ca]
2026-02-20 01:35:42.460766 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=b708b65d-b610-40fd-bc81-0d2313e47ce9/072c6774-113a-4ca1-a8e7-4c165b03fe25]
2026-02-20 01:35:42.472212 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=2cbc2796-472d-474c-9187-9b1d36ec8b30/71e39072-aa44-4a66-a05c-ec4b85d3c9c8]
2026-02-20 01:35:42.483248 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=67db89a1-4700-49f9-ad3f-6f9f27d4fe7b/528e4f8d-abfb-4f6b-8b31-c44acf335289]
2026-02-20 01:35:42.488226 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=2cbc2796-472d-474c-9187-9b1d36ec8b30/35df89d5-061b-439b-8792-1a54b4ca06e9]
2026-02-20 01:35:42.520857 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=b708b65d-b610-40fd-bc81-0d2313e47ce9/65f3eac9-8b9d-466c-8b14-5677fbc93ea2]
2026-02-20 01:35:48.574339 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=67db89a1-4700-49f9-ad3f-6f9f27d4fe7b/6dc65ba2-ebcf-4c2d-a294-11a042e511e6]
2026-02-20 01:35:48.624734 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=2cbc2796-472d-474c-9187-9b1d36ec8b30/7c48204c-9f75-4242-ad46-06da15902d57]
2026-02-20 01:35:48.656851 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=b708b65d-b610-40fd-bc81-0d2313e47ce9/4d1d8767-3f9a-4df7-a383-889dd3aae737]
2026-02-20 01:35:49.078478 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-20 01:35:59.079726 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-20 01:35:59.431145 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=1f6722e3-e41d-457a-a525-90073426014d]
2026-02-20 01:35:59.453660 | orchestrator |
2026-02-20 01:35:59.453741 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-20 01:35:59.453750 | orchestrator |
2026-02-20 01:35:59.453757 | orchestrator | Outputs:
2026-02-20 01:35:59.453764 | orchestrator |
2026-02-20 01:35:59.453771 | orchestrator | manager_address =
2026-02-20 01:35:59.453778 | orchestrator | private_key =
2026-02-20 01:35:59.655512 | orchestrator | ok: Runtime: 0:01:09.432407
2026-02-20 01:35:59.687283 |
2026-02-20 01:35:59.687408 | TASK [Fetch manager address]
2026-02-20 01:36:00.158577 | orchestrator | ok
2026-02-20 01:36:00.169364 |
2026-02-20 01:36:00.169534 | TASK [Set manager_host address]
2026-02-20 01:36:00.254019 | orchestrator | ok
2026-02-20 01:36:00.265345 |
2026-02-20 01:36:00.265509 | LOOP [Update ansible collections]
2026-02-20 01:36:02.001140 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-20 01:36:02.001553 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-20 01:36:02.001617 | orchestrator | Starting galaxy collection install process
2026-02-20 01:36:02.001660 | orchestrator | Process install dependency map
2026-02-20 01:36:02.001697 | orchestrator | Starting collection install process
2026-02-20 01:36:02.001731 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-20 01:36:02.001770 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-20 01:36:02.001809 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-20 01:36:02.001882 | orchestrator | ok: Item: commons Runtime: 0:00:01.401885
2026-02-20 01:36:03.139373 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-20 01:36:03.139549 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-20 01:36:03.139604 | orchestrator | Starting galaxy collection install process
2026-02-20 01:36:03.139645 | orchestrator | Process install dependency map
2026-02-20 01:36:03.139681 | orchestrator | Starting collection install process
2026-02-20 01:36:03.139716 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-20 01:36:03.139751 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-20 01:36:03.139784 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-20 01:36:03.139833 | orchestrator | ok: Item: services Runtime: 0:00:00.844755
2026-02-20 01:36:03.155208 |
2026-02-20 01:36:03.155331 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-20 01:36:13.735476 | orchestrator | ok
2026-02-20 01:36:13.746250 |
2026-02-20 01:36:13.746363 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-20 01:37:13.795839 | orchestrator | ok
2026-02-20 01:37:13.806311 |
2026-02-20 01:37:13.806452 | TASK [Fetch manager ssh hostkey]
2026-02-20 01:37:15.382326 | orchestrator | Output suppressed because no_log was given
2026-02-20 01:37:15.397378 |
2026-02-20 01:37:15.397593 | TASK [Get ssh keypair from terraform environment]
2026-02-20 01:37:15.932986 | orchestrator | ok: Runtime: 0:00:00.007089
2026-02-20 01:37:15.947922 |
2026-02-20 01:37:15.948071 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-20 01:37:15.995678 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-20 01:37:16.006010 |
2026-02-20 01:37:16.006145 | TASK [Run manager part 0]
2026-02-20 01:37:17.213114 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-20 01:37:17.267490 | orchestrator |
2026-02-20 01:37:17.267547 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-20 01:37:17.267555 | orchestrator |
2026-02-20 01:37:17.267591 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-20 01:37:19.489850 | orchestrator | ok: [testbed-manager]
2026-02-20 01:37:19.489889 | orchestrator |
2026-02-20 01:37:19.489917 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-20 01:37:19.489930 | orchestrator |
2026-02-20 01:37:19.489942 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-20 01:37:21.498745 | orchestrator | ok: [testbed-manager]
2026-02-20 01:37:21.498814 | orchestrator |
2026-02-20 01:37:21.498838 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-20 01:37:22.242612 | orchestrator | ok: [testbed-manager]
2026-02-20 01:37:22.242660 | orchestrator |
2026-02-20 01:37:22.242668 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-20 01:37:22.284208 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.284262 | orchestrator |
2026-02-20 01:37:22.284273 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-20 01:37:22.315191 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.315239 | orchestrator |
2026-02-20 01:37:22.315248 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-20 01:37:22.348551 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.348596 | orchestrator |
2026-02-20 01:37:22.348602 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-20 01:37:22.381014 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.381070 | orchestrator |
2026-02-20 01:37:22.381080 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-20 01:37:22.408520 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.408573 | orchestrator |
2026-02-20 01:37:22.408585 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-20 01:37:22.443340 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.443391 | orchestrator |
2026-02-20 01:37:22.443403 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-20 01:37:22.479403 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:37:22.479461 | orchestrator |
2026-02-20 01:37:22.479473 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-20 01:37:23.252373 | orchestrator | changed: [testbed-manager]
2026-02-20 01:37:23.252419 | orchestrator |
2026-02-20 01:37:23.252426 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-20 01:40:10.876103 | orchestrator | changed: [testbed-manager]
2026-02-20 01:40:10.876168 | orchestrator |
2026-02-20 01:40:10.876181 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-20 01:41:49.965027 | orchestrator | changed: [testbed-manager]
2026-02-20 01:41:49.965076 | orchestrator |
2026-02-20 01:41:49.965084 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-20 01:42:11.846964 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:11.847057 | orchestrator |
2026-02-20 01:42:11.847075 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-20 01:42:21.748134 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:21.748236 | orchestrator |
2026-02-20 01:42:21.748263 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-20 01:42:21.799555 | orchestrator | ok: [testbed-manager]
2026-02-20 01:42:21.799658 | orchestrator |
2026-02-20 01:42:21.799682 | orchestrator | TASK [Get current user] ********************************************************
2026-02-20 01:42:22.679845 | orchestrator | ok: [testbed-manager]
2026-02-20 01:42:22.679885 | orchestrator |
2026-02-20 01:42:22.679900 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-20 01:42:23.457263 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:23.457383 | orchestrator |
2026-02-20 01:42:23.457412 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-20 01:42:30.220897 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:30.220998 | orchestrator |
2026-02-20 01:42:30.221045 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-20 01:42:36.750755 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:36.750795 | orchestrator |
2026-02-20 01:42:36.750806 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-20 01:42:39.587591 | orchestrator | changed: [testbed-manager]
2026-02-20 01:42:39.587634 | orchestrator |
2026-02-20 01:42:39.587642 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-02-20 01:42:41.560010 | orchestrator | changed: [testbed-manager] 2026-02-20 01:42:41.560673 | orchestrator | 2026-02-20 01:42:41.560706 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-20 01:42:42.671732 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-20 01:42:42.671828 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-20 01:42:42.671843 | orchestrator | 2026-02-20 01:42:42.671855 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-20 01:42:42.716224 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-20 01:42:42.716277 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-20 01:42:42.716283 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-20 01:42:42.716288 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-20 01:42:47.059821 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-20 01:42:47.059888 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-20 01:42:47.059902 | orchestrator | 2026-02-20 01:42:47.059915 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-20 01:42:47.694390 | orchestrator | changed: [testbed-manager] 2026-02-20 01:42:47.694472 | orchestrator | 2026-02-20 01:42:47.694488 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-20 01:43:05.630409 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-20 01:43:05.630518 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-20 01:43:05.630546 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-20 01:43:05.630593 | orchestrator | 2026-02-20 01:43:05.630615 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-20 01:43:08.137849 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-20 01:43:08.137925 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-20 01:43:08.137940 | orchestrator | 2026-02-20 01:43:08.137951 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-20 01:43:08.137962 | orchestrator | 2026-02-20 01:43:08.137972 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:43:09.640152 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:09.640250 | orchestrator | 2026-02-20 01:43:09.640280 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-20 01:43:09.690113 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:09.690191 | 
orchestrator | 2026-02-20 01:43:09.690205 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-20 01:43:09.756621 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:09.756697 | orchestrator | 2026-02-20 01:43:09.756712 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-20 01:43:10.548325 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:10.548398 | orchestrator | 2026-02-20 01:43:10.548418 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-20 01:43:11.287746 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:11.287785 | orchestrator | 2026-02-20 01:43:11.287794 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-20 01:43:12.741133 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-20 01:43:12.741180 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-20 01:43:12.741187 | orchestrator | 2026-02-20 01:43:12.741201 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-20 01:43:14.121501 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:14.121582 | orchestrator | 2026-02-20 01:43:14.121594 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-20 01:43:15.892056 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-20 01:43:15.892526 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-20 01:43:15.892553 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-20 01:43:15.892565 | orchestrator | 2026-02-20 01:43:15.892619 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-20 01:43:15.947662 | orchestrator | skipping: 
[testbed-manager] 2026-02-20 01:43:15.947755 | orchestrator | 2026-02-20 01:43:15.947779 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-20 01:43:16.021272 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:16.021319 | orchestrator | 2026-02-20 01:43:16.021329 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-20 01:43:16.564303 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:16.564394 | orchestrator | 2026-02-20 01:43:16.564419 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-20 01:43:16.645488 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:16.645593 | orchestrator | 2026-02-20 01:43:16.645618 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-20 01:43:17.513445 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-20 01:43:17.513505 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:17.513516 | orchestrator | 2026-02-20 01:43:17.513524 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-20 01:43:17.554942 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:17.555037 | orchestrator | 2026-02-20 01:43:17.555067 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-20 01:43:17.590617 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:17.590695 | orchestrator | 2026-02-20 01:43:17.590711 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-20 01:43:17.632228 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:17.632293 | orchestrator | 2026-02-20 01:43:17.632307 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-20 01:43:17.702621 | 
orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:17.702720 | orchestrator | 2026-02-20 01:43:17.702746 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-20 01:43:18.428154 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:18.428202 | orchestrator | 2026-02-20 01:43:18.428209 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-20 01:43:18.428215 | orchestrator | 2026-02-20 01:43:18.428220 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:43:19.883810 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:19.883879 | orchestrator | 2026-02-20 01:43:19.883892 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-20 01:43:20.837041 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:20.837118 | orchestrator | 2026-02-20 01:43:20.837133 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:43:20.837145 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-20 01:43:20.837155 | orchestrator | 2026-02-20 01:43:21.277458 | orchestrator | ok: Runtime: 0:06:04.560763 2026-02-20 01:43:21.297767 | 2026-02-20 01:43:21.297937 | TASK [Point out that the log in on the manager is now possible] 2026-02-20 01:43:21.346194 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-20 01:43:21.356915 | 2026-02-20 01:43:21.357049 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-20 01:43:21.391841 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-20 01:43:21.401248 | 2026-02-20 01:43:21.401379 | TASK [Run manager part 1 + 2] 2026-02-20 01:43:22.208310 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-20 01:43:22.279866 | orchestrator | 2026-02-20 01:43:22.279906 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-20 01:43:22.279913 | orchestrator | 2026-02-20 01:43:22.279925 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:43:24.796793 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:24.796832 | orchestrator | 2026-02-20 01:43:24.796851 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-20 01:43:24.833486 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:24.833528 | orchestrator | 2026-02-20 01:43:24.833537 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-20 01:43:24.880275 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:24.880326 | orchestrator | 2026-02-20 01:43:24.880335 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-20 01:43:24.926711 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:24.926762 | orchestrator | 2026-02-20 01:43:24.926772 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-20 01:43:25.002100 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:25.002159 | orchestrator | 2026-02-20 01:43:25.002173 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-20 01:43:25.061661 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:25.061705 | orchestrator | 2026-02-20 01:43:25.061715 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-20 01:43:25.108743 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-20 01:43:25.108789 | orchestrator | 2026-02-20 01:43:25.108797 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-20 01:43:25.854113 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:25.854192 | orchestrator | 2026-02-20 01:43:25.854210 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-20 01:43:25.904238 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:25.904315 | orchestrator | 2026-02-20 01:43:25.904335 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-20 01:43:27.628490 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:27.628584 | orchestrator | 2026-02-20 01:43:27.628642 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-20 01:43:28.260191 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:28.260276 | orchestrator | 2026-02-20 01:43:28.260294 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-20 01:43:29.503648 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:29.503757 | orchestrator | 2026-02-20 01:43:29.503782 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-20 01:43:45.915514 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:45.915653 | orchestrator | 2026-02-20 01:43:45.915666 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-20 01:43:46.620873 | orchestrator | ok: [testbed-manager] 2026-02-20 01:43:46.620975 | orchestrator | 2026-02-20 01:43:46.621003 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-20 01:43:46.670181 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:43:46.670261 | orchestrator | 2026-02-20 01:43:46.670277 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-20 01:43:47.653608 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:47.653721 | orchestrator | 2026-02-20 01:43:47.653746 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-20 01:43:48.690164 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:48.690249 | orchestrator | 2026-02-20 01:43:48.690264 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-20 01:43:49.300765 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:49.300857 | orchestrator | 2026-02-20 01:43:49.300878 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-20 01:43:49.341694 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-20 01:43:49.341781 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-20 01:43:49.341796 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-20 01:43:49.341807 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-20 01:43:52.518944 | orchestrator | changed: [testbed-manager] 2026-02-20 01:43:52.519031 | orchestrator | 2026-02-20 01:43:52.519043 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-20 01:44:02.741061 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-20 01:44:02.741161 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-20 01:44:02.741180 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-20 01:44:02.741192 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-20 01:44:02.741213 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-20 01:44:02.741225 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-20 01:44:02.741236 | orchestrator | 2026-02-20 01:44:02.741249 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-20 01:44:03.848182 | orchestrator | changed: [testbed-manager] 2026-02-20 01:44:03.848246 | orchestrator | 2026-02-20 01:44:03.848261 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-20 01:44:03.891836 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:44:03.891917 | orchestrator | 2026-02-20 01:44:03.891929 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-20 01:44:07.018880 | orchestrator | changed: [testbed-manager] 2026-02-20 01:44:07.018949 | orchestrator | 2026-02-20 01:44:07.018958 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-20 01:44:07.058422 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:44:07.058508 | orchestrator | 2026-02-20 01:44:07.058522 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-20 01:46:02.621592 | orchestrator | changed: [testbed-manager] 2026-02-20 
01:46:02.621716 | orchestrator | 2026-02-20 01:46:02.621752 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-20 01:46:03.970735 | orchestrator | ok: [testbed-manager] 2026-02-20 01:46:03.970875 | orchestrator | 2026-02-20 01:46:03.970906 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:46:03.970929 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-20 01:46:03.970949 | orchestrator | 2026-02-20 01:46:04.533438 | orchestrator | ok: Runtime: 0:02:42.386471 2026-02-20 01:46:04.542063 | 2026-02-20 01:46:04.542192 | TASK [Reboot manager] 2026-02-20 01:46:06.075469 | orchestrator | ok: Runtime: 0:00:01.080726 2026-02-20 01:46:06.093026 | 2026-02-20 01:46:06.093174 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-20 01:46:22.831887 | orchestrator | ok 2026-02-20 01:46:22.840965 | 2026-02-20 01:46:22.841075 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-20 01:47:22.885308 | orchestrator | ok 2026-02-20 01:47:22.895328 | 2026-02-20 01:47:22.895462 | TASK [Deploy manager + bootstrap nodes] 2026-02-20 01:47:25.801965 | orchestrator | 2026-02-20 01:47:25.802174 | orchestrator | # DEPLOY MANAGER 2026-02-20 01:47:25.802193 | orchestrator | 2026-02-20 01:47:25.802204 | orchestrator | + set -e 2026-02-20 01:47:25.802213 | orchestrator | + echo 2026-02-20 01:47:25.802224 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-20 01:47:25.802236 | orchestrator | + echo 2026-02-20 01:47:25.802274 | orchestrator | + cat /opt/manager-vars.sh 2026-02-20 01:47:25.806091 | orchestrator | export NUMBER_OF_NODES=6 2026-02-20 01:47:25.806177 | orchestrator | 2026-02-20 01:47:25.806191 | orchestrator | export CEPH_VERSION=reef 2026-02-20 01:47:25.806202 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-20 01:47:25.806213 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-20 01:47:25.806234 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-20 01:47:25.806243 | orchestrator | 2026-02-20 01:47:25.806258 | orchestrator | export ARA=false 2026-02-20 01:47:25.806268 | orchestrator | export DEPLOY_MODE=manager 2026-02-20 01:47:25.806289 | orchestrator | export TEMPEST=false 2026-02-20 01:47:25.806305 | orchestrator | export IS_ZUUL=true 2026-02-20 01:47:25.806318 | orchestrator | 2026-02-20 01:47:25.806348 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 01:47:25.806366 | orchestrator | export EXTERNAL_API=false 2026-02-20 01:47:25.806381 | orchestrator | 2026-02-20 01:47:25.806396 | orchestrator | export IMAGE_USER=ubuntu 2026-02-20 01:47:25.806416 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-20 01:47:25.806431 | orchestrator | 2026-02-20 01:47:25.806447 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-20 01:47:25.806474 | orchestrator | 2026-02-20 01:47:25.806490 | orchestrator | + echo 2026-02-20 01:47:25.806506 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 01:47:25.807828 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 01:47:25.807870 | orchestrator | ++ INTERACTIVE=false 2026-02-20 01:47:25.807886 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 01:47:25.807958 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 01:47:25.807975 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 01:47:25.807990 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 01:47:25.808005 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 01:47:25.808020 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 01:47:25.808030 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 01:47:25.808039 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 01:47:25.808049 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 01:47:25.808058 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 01:47:25.808073 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 01:47:25.808082 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 01:47:25.808102 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 01:47:25.808111 | orchestrator | ++ export ARA=false 2026-02-20 01:47:25.808120 | orchestrator | ++ ARA=false 2026-02-20 01:47:25.808129 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 01:47:25.808137 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 01:47:25.808146 | orchestrator | ++ export TEMPEST=false 2026-02-20 01:47:25.808154 | orchestrator | ++ TEMPEST=false 2026-02-20 01:47:25.808163 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 01:47:25.808171 | orchestrator | ++ IS_ZUUL=true 2026-02-20 01:47:25.808180 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 01:47:25.808189 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 01:47:25.808198 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 01:47:25.808206 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 01:47:25.808214 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 01:47:25.808223 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 01:47:25.808232 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 01:47:25.808240 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 01:47:25.808249 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 01:47:25.808257 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 01:47:25.808266 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-20 01:47:25.864182 | orchestrator | + docker version 2026-02-20 01:47:25.973618 | orchestrator | Client: Docker Engine - Community 2026-02-20 01:47:25.973749 | orchestrator | Version: 27.5.1 2026-02-20 01:47:25.973765 | orchestrator | API version: 1.47 2026-02-20 01:47:25.973773 | orchestrator | Go version: go1.22.11 2026-02-20 01:47:25.973780 | orchestrator | Git commit: 9f9e405 2026-02-20 01:47:25.973788 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-20 01:47:25.973796 | orchestrator | OS/Arch: linux/amd64 2026-02-20 01:47:25.973803 | orchestrator | Context: default 2026-02-20 01:47:25.973810 | orchestrator | 2026-02-20 01:47:25.973818 | orchestrator | Server: Docker Engine - Community 2026-02-20 01:47:25.973826 | orchestrator | Engine: 2026-02-20 01:47:25.973844 | orchestrator | Version: 27.5.1 2026-02-20 01:47:25.973853 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-20 01:47:25.973887 | orchestrator | Go version: go1.22.11 2026-02-20 01:47:25.973931 | orchestrator | Git commit: 4c9b3b0 2026-02-20 01:47:25.973938 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-20 01:47:25.973944 | orchestrator | OS/Arch: linux/amd64 2026-02-20 01:47:25.973951 | orchestrator | Experimental: false 2026-02-20 01:47:25.973958 | orchestrator | containerd: 2026-02-20 01:47:25.973965 | orchestrator | Version: v2.2.1 2026-02-20 01:47:25.973971 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-20 01:47:25.973978 | orchestrator | runc: 2026-02-20 01:47:25.973984 | orchestrator | Version: 1.3.4 2026-02-20 01:47:25.973990 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-20 01:47:25.973998 | orchestrator | docker-init: 2026-02-20 01:47:25.974005 | orchestrator | Version: 0.19.0 2026-02-20 01:47:25.974013 | orchestrator | GitCommit: de40ad0 2026-02-20 01:47:25.977575 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-20 01:47:25.989074 | orchestrator | + set -e 2026-02-20 01:47:25.989197 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 01:47:25.989207 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 01:47:25.989212 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 01:47:25.989217 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 01:47:25.989222 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 01:47:25.989226 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 
01:47:25.989232 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 01:47:25.989237 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 01:47:25.989242 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 01:47:25.989247 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 01:47:25.989252 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 01:47:25.989256 | orchestrator | ++ export ARA=false 2026-02-20 01:47:25.989261 | orchestrator | ++ ARA=false 2026-02-20 01:47:25.989266 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 01:47:25.989270 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 01:47:25.989275 | orchestrator | ++ export TEMPEST=false 2026-02-20 01:47:25.989279 | orchestrator | ++ TEMPEST=false 2026-02-20 01:47:25.989284 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 01:47:25.989288 | orchestrator | ++ IS_ZUUL=true 2026-02-20 01:47:25.989293 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 01:47:25.989298 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 01:47:25.989302 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 01:47:25.989307 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 01:47:25.989332 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 01:47:25.989337 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 01:47:25.989342 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 01:47:25.989347 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 01:47:25.989361 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 01:47:25.989366 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 01:47:25.989370 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 01:47:25.989375 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 01:47:25.989379 | orchestrator | ++ INTERACTIVE=false 2026-02-20 01:47:25.989384 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 01:47:25.989392 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-20 01:47:25.989784 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-20 01:47:25.989876 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-20 01:47:25.996239 | orchestrator | + set -e 2026-02-20 01:47:25.996328 | orchestrator | + VERSION=9.5.0 2026-02-20 01:47:25.996344 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-20 01:47:26.006971 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-20 01:47:26.007060 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-20 01:47:26.011485 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-20 01:47:26.017225 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-20 01:47:26.024885 | orchestrator | /opt/configuration ~ 2026-02-20 01:47:26.024969 | orchestrator | + set -e 2026-02-20 01:47:26.024978 | orchestrator | + pushd /opt/configuration 2026-02-20 01:47:26.024986 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-20 01:47:26.026680 | orchestrator | + source /opt/venv/bin/activate 2026-02-20 01:47:26.028747 | orchestrator | ++ deactivate nondestructive 2026-02-20 01:47:26.028791 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:26.028808 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:26.028850 | orchestrator | ++ hash -r 2026-02-20 01:47:26.028864 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:26.028878 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-20 01:47:26.028928 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-20 01:47:26.028942 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-20 01:47:26.028955 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-20 01:47:26.028967 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-20 01:47:26.028979 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-20 01:47:26.028992 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-20 01:47:26.029006 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:47:26.029018 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:47:26.029030 | orchestrator | ++ export PATH 2026-02-20 01:47:26.029043 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:26.029055 | orchestrator | ++ '[' -z '' ']' 2026-02-20 01:47:26.029068 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-20 01:47:26.029080 | orchestrator | ++ PS1='(venv) ' 2026-02-20 01:47:26.029092 | orchestrator | ++ export PS1 2026-02-20 01:47:26.029104 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-20 01:47:26.029115 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-20 01:47:26.029127 | orchestrator | ++ hash -r 2026-02-20 01:47:26.029140 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-20 01:47:27.507851 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-20 01:47:27.509039 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-20 01:47:27.510792 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-20 01:47:27.512614 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-20 01:47:27.514150 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-20 01:47:27.528002 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-20 01:47:27.529567 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-20 01:47:27.531010 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-20 01:47:27.533248 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-20 01:47:27.591618 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-20 01:47:27.594672 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-20 01:47:27.598132 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-20 01:47:27.600432 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-20 01:47:27.607968 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-20 01:47:27.905468 | orchestrator | ++ which gilt 2026-02-20 01:47:27.913060 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-20 01:47:27.913162 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-20 01:47:28.198179 | orchestrator | osism.cfg-generics: 2026-02-20 01:47:28.363095 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-20 01:47:28.363453 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-20 01:47:28.364191 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-20 01:47:28.364298 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-20 01:47:29.214367 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-20 01:47:29.227535 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-20 01:47:29.577612 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-20 01:47:29.633131 | orchestrator | ~ 2026-02-20 01:47:29.633228 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-20 01:47:29.633240 | orchestrator | + deactivate 2026-02-20 01:47:29.633249 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-20 01:47:29.633259 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:47:29.633266 | orchestrator | + export PATH 2026-02-20 01:47:29.633273 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-20 01:47:29.633280 | orchestrator | + '[' -n '' ']' 2026-02-20 01:47:29.633290 | orchestrator | + hash -r 2026-02-20 01:47:29.633297 | orchestrator | + '[' -n '' ']' 2026-02-20 01:47:29.633304 | orchestrator | + unset VIRTUAL_ENV 2026-02-20 01:47:29.633311 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-20 01:47:29.633319 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-20 01:47:29.633326 | orchestrator | + unset -f deactivate 2026-02-20 01:47:29.633333 | orchestrator | + popd 2026-02-20 01:47:29.633959 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-20 01:47:29.633975 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-20 01:47:29.635031 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-20 01:47:29.703407 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-20 01:47:29.703492 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-20 01:47:29.705128 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-20 01:47:29.779981 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-20 01:47:29.781203 | orchestrator | ++ semver 2024.2 2025.1 2026-02-20 01:47:29.846473 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-20 01:47:29.846585 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-20 01:47:29.946270 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-20 01:47:29.946397 | orchestrator | + source /opt/venv/bin/activate 2026-02-20 01:47:29.946419 | orchestrator | ++ deactivate nondestructive 2026-02-20 01:47:29.946435 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:29.946448 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:29.946461 | orchestrator | ++ hash -r 2026-02-20 01:47:29.946474 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:29.946487 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-20 01:47:29.946500 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-20 01:47:29.946511 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-20 01:47:29.946537 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-20 01:47:29.946551 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-20 01:47:29.946564 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-20 01:47:29.946578 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-20 01:47:29.946593 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:47:29.946629 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:47:29.946638 | orchestrator | ++ export PATH 2026-02-20 01:47:29.946645 | orchestrator | ++ '[' -n '' ']' 2026-02-20 01:47:29.946652 | orchestrator | ++ '[' -z '' ']' 2026-02-20 01:47:29.946659 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-20 01:47:29.946667 | orchestrator | ++ PS1='(venv) ' 2026-02-20 01:47:29.946674 | orchestrator | ++ export PS1 2026-02-20 01:47:29.946681 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-20 01:47:29.946688 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-20 01:47:29.946696 | orchestrator | ++ hash -r 2026-02-20 01:47:29.946704 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-20 01:47:31.306613 | orchestrator | 2026-02-20 01:47:31.306702 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-20 01:47:31.306710 | orchestrator | 2026-02-20 01:47:31.306715 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-20 01:47:31.947835 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:31.947954 | orchestrator | 2026-02-20 01:47:31.947968 | orchestrator | TASK [Copy fact files] ********************************************************* 
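(Aside on the `semver 9.5.0 7.0.0` / `[[ 1 -ge 0 ]]` steps above: the job gates features on version comparisons via a `semver` helper script whose source is not shown in this log. A rough stand-in using GNU `sort -V`, which orders plain version strings the same way for the comparisons seen here — though unlike real semver it does not rank a pre-release like `10.0.0-0` below its release — could look like:)

```shell
# version_ge A B: succeed when A >= B under version-string ordering.
# Hypothetical stand-in for the testbed's semver helper, not its actual code.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Mirrors the gating above: 9.5.0 >= 7.0.0, so the flag gets written.
version_ge 9.5.0 7.0.0 && echo "enable_osism_kubernetes: true"
```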
2026-02-20 01:47:32.984019 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:32.984121 | orchestrator | 2026-02-20 01:47:32.984137 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-20 01:47:32.984183 | orchestrator | 2026-02-20 01:47:32.984196 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:47:35.371890 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:35.372003 | orchestrator | 2026-02-20 01:47:35.372017 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-20 01:47:35.415084 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:35.415170 | orchestrator | 2026-02-20 01:47:35.415181 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-20 01:47:35.932716 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:35.932795 | orchestrator | 2026-02-20 01:47:35.932807 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-20 01:47:35.973978 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:47:35.974113 | orchestrator | 2026-02-20 01:47:35.974129 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-20 01:47:36.371894 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:36.372059 | orchestrator | 2026-02-20 01:47:36.372074 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-20 01:47:36.732844 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:36.732960 | orchestrator | 2026-02-20 01:47:36.732973 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-20 01:47:36.886698 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:47:36.886820 | orchestrator | 2026-02-20 01:47:36.886835 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-20 01:47:36.886846 | orchestrator | 2026-02-20 01:47:36.886856 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:47:38.695472 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:38.695607 | orchestrator | 2026-02-20 01:47:38.695624 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-20 01:47:38.813841 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-20 01:47:38.813973 | orchestrator | 2026-02-20 01:47:38.813993 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-20 01:47:38.875398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-20 01:47:38.875517 | orchestrator | 2026-02-20 01:47:38.875533 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-20 01:47:40.184126 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-20 01:47:40.184227 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-20 01:47:40.184240 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-20 01:47:40.184249 | orchestrator | 2026-02-20 01:47:40.184260 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-20 01:47:42.158901 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-20 01:47:42.159071 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-20 01:47:42.159078 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-20 01:47:42.159083 | orchestrator | 2026-02-20 01:47:42.159089 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-20 01:47:42.869339 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-20 01:47:42.869421 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:42.869431 | orchestrator | 2026-02-20 01:47:42.869439 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-20 01:47:43.576838 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-20 01:47:43.576991 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:43.577010 | orchestrator | 2026-02-20 01:47:43.577022 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-20 01:47:43.639480 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:47:43.639574 | orchestrator | 2026-02-20 01:47:43.639588 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-20 01:47:44.039078 | orchestrator | ok: [testbed-manager] 2026-02-20 01:47:44.039232 | orchestrator | 2026-02-20 01:47:44.039262 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-20 01:47:44.118888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-20 01:47:44.119015 | orchestrator | 2026-02-20 01:47:44.119032 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-20 01:47:45.406471 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:45.406583 | orchestrator | 2026-02-20 01:47:45.406605 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-20 01:47:46.307188 | orchestrator | changed: [testbed-manager] 2026-02-20 01:47:46.307275 | orchestrator | 2026-02-20 01:47:46.307285 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-20 01:48:00.457230 | 
orchestrator | changed: [testbed-manager] 2026-02-20 01:48:00.457326 | orchestrator | 2026-02-20 01:48:00.457340 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-20 01:48:00.523915 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:48:00.524033 | orchestrator | 2026-02-20 01:48:00.524072 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-20 01:48:00.524084 | orchestrator | 2026-02-20 01:48:00.524094 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:48:02.507392 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:02.507479 | orchestrator | 2026-02-20 01:48:02.507502 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-20 01:48:02.641330 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-20 01:48:02.641426 | orchestrator | 2026-02-20 01:48:02.641440 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-20 01:48:02.712543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 01:48:02.712632 | orchestrator | 2026-02-20 01:48:02.712645 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-20 01:48:05.529440 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:05.529515 | orchestrator | 2026-02-20 01:48:05.529522 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-20 01:48:05.577829 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:05.578001 | orchestrator | 2026-02-20 01:48:05.578073 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-20 01:48:05.715440 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-20 01:48:05.715544 | orchestrator | 2026-02-20 01:48:05.715572 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-20 01:48:08.754627 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-20 01:48:08.754750 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-20 01:48:08.754765 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-20 01:48:08.754776 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-20 01:48:08.754787 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-20 01:48:08.754798 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-20 01:48:08.754807 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-20 01:48:08.754817 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-20 01:48:08.754828 | orchestrator | 2026-02-20 01:48:08.754838 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-20 01:48:09.482495 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:09.482592 | orchestrator | 2026-02-20 01:48:09.482608 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-20 01:48:10.180657 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:10.180746 | orchestrator | 2026-02-20 01:48:10.180755 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-20 01:48:10.265004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-20 01:48:10.265078 | orchestrator | 2026-02-20 01:48:10.265087 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-20 01:48:11.569851 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-20 01:48:11.570006 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-20 01:48:11.570089 | orchestrator | 2026-02-20 01:48:11.570103 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-20 01:48:12.257644 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:12.257764 | orchestrator | 2026-02-20 01:48:12.257782 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-20 01:48:12.306803 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:48:12.306909 | orchestrator | 2026-02-20 01:48:12.306925 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-20 01:48:12.386509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-20 01:48:12.386621 | orchestrator | 2026-02-20 01:48:12.386639 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-20 01:48:13.025429 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:13.025547 | orchestrator | 2026-02-20 01:48:13.025568 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-20 01:48:13.097979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-20 01:48:13.098129 | orchestrator | 2026-02-20 01:48:13.098148 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-20 01:48:14.569685 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-20 01:48:14.569794 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-20 01:48:14.569809 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:14.569819 | orchestrator | 2026-02-20 01:48:14.569828 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-20 01:48:15.251437 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:15.251519 | orchestrator | 2026-02-20 01:48:15.251527 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-20 01:48:15.302094 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:48:15.302206 | orchestrator | 2026-02-20 01:48:15.302224 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-20 01:48:15.387210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-20 01:48:15.387317 | orchestrator | 2026-02-20 01:48:15.387334 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-20 01:48:15.951656 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:15.951755 | orchestrator | 2026-02-20 01:48:15.951769 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-20 01:48:16.379444 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:16.379521 | orchestrator | 2026-02-20 01:48:16.379528 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-20 01:48:17.732538 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-20 01:48:17.732638 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-20 01:48:17.732651 | orchestrator | 2026-02-20 01:48:17.732663 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-20 01:48:18.440889 | orchestrator | changed: [testbed-manager] 2026-02-20 
01:48:18.441043 | orchestrator | 2026-02-20 01:48:18.441061 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-20 01:48:18.813766 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:18.813859 | orchestrator | 2026-02-20 01:48:18.813872 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-20 01:48:19.163635 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:19.163740 | orchestrator | 2026-02-20 01:48:19.163756 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-20 01:48:19.216779 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:48:19.216874 | orchestrator | 2026-02-20 01:48:19.216888 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-20 01:48:19.304860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-20 01:48:19.305025 | orchestrator | 2026-02-20 01:48:19.305042 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-20 01:48:19.354117 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:19.354228 | orchestrator | 2026-02-20 01:48:19.354249 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-20 01:48:21.311230 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-20 01:48:21.311353 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-20 01:48:21.311378 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-20 01:48:21.311396 | orchestrator | 2026-02-20 01:48:21.311414 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-20 01:48:22.072024 | orchestrator | changed: [testbed-manager] 2026-02-20 
01:48:22.072104 | orchestrator | 2026-02-20 01:48:22.072114 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-20 01:48:22.841092 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:22.841248 | orchestrator | 2026-02-20 01:48:22.841277 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-20 01:48:23.636912 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:23.637031 | orchestrator | 2026-02-20 01:48:23.637047 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-20 01:48:23.718890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-20 01:48:23.719049 | orchestrator | 2026-02-20 01:48:23.719070 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-20 01:48:23.771874 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:23.772013 | orchestrator | 2026-02-20 01:48:23.772032 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-20 01:48:24.541387 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-20 01:48:24.541513 | orchestrator | 2026-02-20 01:48:24.541539 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-20 01:48:24.640341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-20 01:48:24.640474 | orchestrator | 2026-02-20 01:48:24.640493 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-20 01:48:25.410887 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:25.411061 | orchestrator | 2026-02-20 01:48:25.411085 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-20 01:48:26.076724 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:26.076849 | orchestrator | 2026-02-20 01:48:26.076866 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-20 01:48:26.137877 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:48:26.138104 | orchestrator | 2026-02-20 01:48:26.138125 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-20 01:48:26.205083 | orchestrator | ok: [testbed-manager] 2026-02-20 01:48:26.205156 | orchestrator | 2026-02-20 01:48:26.205163 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-20 01:48:27.084097 | orchestrator | changed: [testbed-manager] 2026-02-20 01:48:27.084202 | orchestrator | 2026-02-20 01:48:27.084219 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-20 01:49:44.172251 | orchestrator | changed: [testbed-manager] 2026-02-20 01:49:44.172347 | orchestrator | 2026-02-20 01:49:44.172360 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-20 01:49:45.266327 | orchestrator | ok: [testbed-manager] 2026-02-20 01:49:45.266502 | orchestrator | 2026-02-20 01:49:45.266520 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-20 01:49:45.330274 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:49:45.330426 | orchestrator | 2026-02-20 01:49:45.330451 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-20 01:49:51.810306 | orchestrator | changed: [testbed-manager] 2026-02-20 01:49:51.810414 | orchestrator | 2026-02-20 01:49:51.810428 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
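(Aside: the `[[ -e /opt/venv/bin/activate ]]` guard that recurs throughout this trace is a common pattern — source the virtualenv only if it exists, so the script also works without one. A self-contained sketch, using a throwaway directory with a fake `activate` script instead of the real `/opt/venv`:)

```shell
# Guarded venv activation as seen in the trace above.
# The temp dir and fake activate script are illustrative only.
venv=$(mktemp -d)
mkdir -p "$venv/bin"
printf 'export VIRTUAL_ENV=%s\nexport PATH=%s/bin:$PATH\n' "$venv" "$venv" \
  > "$venv/bin/activate"

# Only source the environment if the activate script is present.
if [ -e "$venv/bin/activate" ]; then
  . "$venv/bin/activate"
fi
echo "$VIRTUAL_ENV"
```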
2026-02-20 01:49:51.862330 | orchestrator | ok: [testbed-manager] 2026-02-20 01:49:51.862468 | orchestrator | 2026-02-20 01:49:51.862487 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-20 01:49:51.862500 | orchestrator | 2026-02-20 01:49:51.862511 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-20 01:49:52.013949 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:49:52.014118 | orchestrator | 2026-02-20 01:49:52.014139 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-20 01:50:52.071389 | orchestrator | Pausing for 60 seconds 2026-02-20 01:50:52.071485 | orchestrator | changed: [testbed-manager] 2026-02-20 01:50:52.071507 | orchestrator | 2026-02-20 01:50:52.071523 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-20 01:50:55.256456 | orchestrator | changed: [testbed-manager] 2026-02-20 01:50:55.256541 | orchestrator | 2026-02-20 01:50:55.256554 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-20 01:51:57.487557 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-20 01:51:57.487636 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-20 01:51:57.487660 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
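(Aside: the `FAILED - RETRYING ... (N retries left)` lines above come from Ansible's `retries`/`until` loop on the container health check — the task reruns the probe until it reports healthy or the retry budget runs out. The same poll-until-healthy idea in plain shell; the retry count, delay, and probe below are illustrative, not the role's actual values:)

```shell
# wait_for N CMD...: rerun CMD until it succeeds, at most N attempts.
wait_for() {
  retries=$1; shift
  while ! "$@"; do
    retries=$((retries - 1))
    [ "$retries" -gt 0 ] || return 1
    sleep 0.1   # the real task waits considerably longer between attempts
  done
}

# Illustrative probe that becomes "healthy" on the third attempt.
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

wait_for 50 probe && echo healthy
```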
2026-02-20 01:51:57.487666 | orchestrator | changed: [testbed-manager] 2026-02-20 01:51:57.487672 | orchestrator | 2026-02-20 01:51:57.487678 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-20 01:52:10.660089 | orchestrator | changed: [testbed-manager] 2026-02-20 01:52:10.660267 | orchestrator | 2026-02-20 01:52:10.660292 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-20 01:52:10.750899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-20 01:52:10.750984 | orchestrator | 2026-02-20 01:52:10.750991 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-20 01:52:10.750997 | orchestrator | 2026-02-20 01:52:10.751001 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-20 01:52:10.798685 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:52:10.798774 | orchestrator | 2026-02-20 01:52:10.798787 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-20 01:52:10.873784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-20 01:52:10.873885 | orchestrator | 2026-02-20 01:52:10.873908 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-20 01:52:11.707704 | orchestrator | changed: [testbed-manager] 2026-02-20 01:52:11.707795 | orchestrator | 2026-02-20 01:52:11.707811 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-20 01:52:14.917137 | orchestrator | ok: [testbed-manager] 2026-02-20 01:52:14.917268 | orchestrator | 2026-02-20 01:52:14.917313 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-20 01:52:15.003916 | orchestrator | ok: [testbed-manager] => { 2026-02-20 01:52:15.004050 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-20 01:52:15.004078 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-20 01:52:15.004148 | orchestrator | "Checking running containers against expected versions...", 2026-02-20 01:52:15.004170 | orchestrator | "", 2026-02-20 01:52:15.004263 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-20 01:52:15.004286 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-20 01:52:15.004307 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004327 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-20 01:52:15.004345 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004365 | orchestrator | "", 2026-02-20 01:52:15.004384 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-20 01:52:15.004444 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-20 01:52:15.004465 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004484 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-20 01:52:15.004503 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004524 | orchestrator | "", 2026-02-20 01:52:15.004543 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-20 01:52:15.004562 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-20 01:52:15.004581 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004600 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-20 01:52:15.004619 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004638 | orchestrator | 
"", 2026-02-20 01:52:15.004656 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-20 01:52:15.004676 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-20 01:52:15.004694 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004713 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-20 01:52:15.004732 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004750 | orchestrator | "", 2026-02-20 01:52:15.004773 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-20 01:52:15.004791 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-20 01:52:15.004810 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004830 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-20 01:52:15.004848 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004867 | orchestrator | "", 2026-02-20 01:52:15.004886 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-20 01:52:15.004905 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.004922 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.004942 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.004961 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.004980 | orchestrator | "", 2026-02-20 01:52:15.004998 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-20 01:52:15.005018 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-20 01:52:15.005037 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005055 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-20 01:52:15.005073 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005091 | orchestrator | "", 2026-02-20 01:52:15.005109 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-20 01:52:15.005128 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-20 01:52:15.005146 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005165 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-20 01:52:15.005183 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005233 | orchestrator | "", 2026-02-20 01:52:15.005252 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-20 01:52:15.005271 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-20 01:52:15.005289 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005309 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-20 01:52:15.005327 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005348 | orchestrator | "", 2026-02-20 01:52:15.005367 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-20 01:52:15.005386 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-20 01:52:15.005404 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005423 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-20 01:52:15.005441 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005460 | orchestrator | "", 2026-02-20 01:52:15.005479 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-20 01:52:15.005517 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005536 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005555 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005573 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005592 | orchestrator | "", 2026-02-20 01:52:15.005611 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-20 01:52:15.005630 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005649 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005668 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005688 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005706 | orchestrator | "", 2026-02-20 01:52:15.005724 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-20 01:52:15.005744 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005762 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005780 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005798 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005816 | orchestrator | "", 2026-02-20 01:52:15.005833 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-20 01:52:15.005852 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005869 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.005886 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.005937 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.005958 | orchestrator | "", 2026-02-20 01:52:15.005977 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-20 01:52:15.005995 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.006100 | orchestrator | " Enabled: true", 2026-02-20 01:52:15.006125 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-20 01:52:15.006144 | orchestrator | " Status: ✅ MATCH", 2026-02-20 01:52:15.006164 | orchestrator | "", 2026-02-20 01:52:15.006185 | orchestrator | "=== Summary ===", 2026-02-20 01:52:15.006234 | orchestrator | "Errors (version mismatches): 0", 2026-02-20 01:52:15.006252 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-20 01:52:15.006270 | orchestrator | "", 2026-02-20 01:52:15.006287 | orchestrator | "✅ All running containers match expected versions!" 2026-02-20 01:52:15.006305 | orchestrator | ] 2026-02-20 01:52:15.006323 | orchestrator | } 2026-02-20 01:52:15.006344 | orchestrator | 2026-02-20 01:52:15.006363 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-20 01:52:15.054743 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:52:15.054838 | orchestrator | 2026-02-20 01:52:15.054851 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:52:15.054864 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-20 01:52:15.054874 | orchestrator | 2026-02-20 01:52:15.189787 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-20 01:52:15.189894 | orchestrator | + deactivate 2026-02-20 01:52:15.189913 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-20 01:52:15.189931 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-20 01:52:15.189947 | orchestrator | + export PATH 2026-02-20 01:52:15.189962 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-20 01:52:15.189977 | orchestrator | + '[' -n '' ']' 2026-02-20 01:52:15.189991 | orchestrator | + hash -r 2026-02-20 01:52:15.190005 | orchestrator | + '[' -n '' ']' 2026-02-20 01:52:15.190065 | orchestrator | + unset VIRTUAL_ENV 2026-02-20 01:52:15.190081 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-20 01:52:15.190096 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-20 01:52:15.190111 | orchestrator | + unset -f deactivate 2026-02-20 01:52:15.190127 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-20 01:52:15.197740 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-20 01:52:15.197806 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-20 01:52:15.197829 | orchestrator | + local max_attempts=60 2026-02-20 01:52:15.197835 | orchestrator | + local name=ceph-ansible 2026-02-20 01:52:15.197840 | orchestrator | + local attempt_num=1 2026-02-20 01:52:15.198881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-20 01:52:15.236695 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-20 01:52:15.236789 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-20 01:52:15.236803 | orchestrator | + local max_attempts=60 2026-02-20 01:52:15.236816 | orchestrator | + local name=kolla-ansible 2026-02-20 01:52:15.236827 | orchestrator | + local attempt_num=1 2026-02-20 01:52:15.237440 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-20 01:52:15.274008 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-20 01:52:15.274116 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-20 01:52:15.274126 | orchestrator | + local max_attempts=60 2026-02-20 01:52:15.274133 | orchestrator | + local name=osism-ansible 2026-02-20 01:52:15.274138 | orchestrator | + local attempt_num=1 2026-02-20 01:52:15.274236 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-20 01:52:15.307294 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-20 01:52:15.307375 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-20 01:52:15.307387 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-20 01:52:15.933696 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-20 01:52:16.098576 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-20 01:52:16.098679 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-20 01:52:16.098691 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-20 01:52:16.098698 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-20 01:52:16.098707 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-20 01:52:16.098732 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-20 01:52:16.098738 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-20 01:52:16.098744 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-20 01:52:16.098751 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-20 01:52:16.098757 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-20 01:52:16.098764 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-20 01:52:16.098770 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-20 01:52:16.098776 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-20 01:52:16.098800 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-20 01:52:16.098807 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-20 01:52:16.098814 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-20 01:52:16.107609 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-20 01:52:16.162541 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-20 01:52:16.162654 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-20 01:52:16.165918 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-20 01:52:28.684742 | orchestrator | 2026-02-20 01:52:28 | INFO  | Task 64f18b37-358a-4618-a438-87237b37039e (resolvconf) was prepared for execution. 2026-02-20 01:52:28.684845 | orchestrator | 2026-02-20 01:52:28 | INFO  | It takes a moment until task 64f18b37-358a-4618-a438-87237b37039e (resolvconf) has been started and output is visible here. 
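[editor's annotation] The shell trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` with a `max_attempts` counter. A minimal sketch of that polling pattern, generalized so the status command is a parameter; the sleep interval and the failure message are assumptions, as the trace exits on the first check and never shows the retry path:

```shell
# Poll a status command until it prints "healthy", up to max_attempts times.
# Usage mirrors the traced helper, e.g.:
#   wait_until_healthy 60 /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
wait_until_healthy() {
    local max_attempts="$1"; shift
    local attempt_num=1
    until [[ "$("$@")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "gave up waiting for healthy status after ${max_attempts} attempts" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep "${WAIT_INTERVAL:-5}"   # interval is an assumption; the trace does not show it
    done
}
```

Passing the full status command as arguments (rather than hard-coding the container name) keeps the helper testable without a Docker daemon.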
2026-02-20 01:52:44.618153 | orchestrator | 2026-02-20 01:52:44.618308 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-20 01:52:44.618334 | orchestrator | 2026-02-20 01:52:44.618345 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 01:52:44.618354 | orchestrator | Friday 20 February 2026 01:52:33 +0000 (0:00:00.166) 0:00:00.166 ******* 2026-02-20 01:52:44.618363 | orchestrator | ok: [testbed-manager] 2026-02-20 01:52:44.618373 | orchestrator | 2026-02-20 01:52:44.618382 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-20 01:52:44.618392 | orchestrator | Friday 20 February 2026 01:52:37 +0000 (0:00:04.000) 0:00:04.166 ******* 2026-02-20 01:52:44.618401 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:52:44.618411 | orchestrator | 2026-02-20 01:52:44.618419 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-20 01:52:44.618428 | orchestrator | Friday 20 February 2026 01:52:37 +0000 (0:00:00.066) 0:00:04.233 ******* 2026-02-20 01:52:44.618437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-20 01:52:44.618447 | orchestrator | 2026-02-20 01:52:44.618456 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-20 01:52:44.618464 | orchestrator | Friday 20 February 2026 01:52:37 +0000 (0:00:00.073) 0:00:04.306 ******* 2026-02-20 01:52:44.618489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 01:52:44.618498 | orchestrator | 2026-02-20 01:52:44.618507 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-20 01:52:44.618515 | orchestrator | Friday 20 February 2026 01:52:37 +0000 (0:00:00.098) 0:00:04.405 ******* 2026-02-20 01:52:44.618524 | orchestrator | ok: [testbed-manager] 2026-02-20 01:52:44.618533 | orchestrator | 2026-02-20 01:52:44.618541 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-20 01:52:44.618550 | orchestrator | Friday 20 February 2026 01:52:39 +0000 (0:00:01.311) 0:00:05.717 ******* 2026-02-20 01:52:44.618558 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:52:44.618567 | orchestrator | 2026-02-20 01:52:44.618575 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-20 01:52:44.618584 | orchestrator | Friday 20 February 2026 01:52:39 +0000 (0:00:00.057) 0:00:05.774 ******* 2026-02-20 01:52:44.618617 | orchestrator | ok: [testbed-manager] 2026-02-20 01:52:44.618626 | orchestrator | 2026-02-20 01:52:44.618635 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-20 01:52:44.618643 | orchestrator | Friday 20 February 2026 01:52:39 +0000 (0:00:00.540) 0:00:06.315 ******* 2026-02-20 01:52:44.618652 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:52:44.618661 | orchestrator | 2026-02-20 01:52:44.618669 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-20 01:52:44.618679 | orchestrator | Friday 20 February 2026 01:52:39 +0000 (0:00:00.086) 0:00:06.402 ******* 2026-02-20 01:52:44.618688 | orchestrator | changed: [testbed-manager] 2026-02-20 01:52:44.618696 | orchestrator | 2026-02-20 01:52:44.618705 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-20 01:52:44.618714 | orchestrator | Friday 20 February 2026 01:52:40 +0000 (0:00:00.684) 0:00:07.087 ******* 2026-02-20 01:52:44.618722 | orchestrator | changed: 
[testbed-manager] 2026-02-20 01:52:44.618731 | orchestrator | 2026-02-20 01:52:44.618739 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-20 01:52:44.618748 | orchestrator | Friday 20 February 2026 01:52:41 +0000 (0:00:01.111) 0:00:08.198 ******* 2026-02-20 01:52:44.618757 | orchestrator | ok: [testbed-manager] 2026-02-20 01:52:44.618766 | orchestrator | 2026-02-20 01:52:44.618774 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-20 01:52:44.618783 | orchestrator | Friday 20 February 2026 01:52:42 +0000 (0:00:01.130) 0:00:09.328 ******* 2026-02-20 01:52:44.618791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-20 01:52:44.618800 | orchestrator | 2026-02-20 01:52:44.618809 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-20 01:52:44.618817 | orchestrator | Friday 20 February 2026 01:52:42 +0000 (0:00:00.077) 0:00:09.406 ******* 2026-02-20 01:52:44.618825 | orchestrator | changed: [testbed-manager] 2026-02-20 01:52:44.618834 | orchestrator | 2026-02-20 01:52:44.618843 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:52:44.618854 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 01:52:44.618869 | orchestrator | 2026-02-20 01:52:44.618884 | orchestrator | 2026-02-20 01:52:44.618894 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 01:52:44.618902 | orchestrator | Friday 20 February 2026 01:52:44 +0000 (0:00:01.438) 0:00:10.845 ******* 2026-02-20 01:52:44.618910 | orchestrator | =============================================================================== 2026-02-20 01:52:44.618919 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.00s 2026-02-20 01:52:44.618927 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.44s 2026-02-20 01:52:44.618936 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.31s 2026-02-20 01:52:44.618944 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.13s 2026-02-20 01:52:44.618953 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-02-20 01:52:44.618962 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.68s 2026-02-20 01:52:44.618986 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-02-20 01:52:44.618995 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s 2026-02-20 01:52:44.619004 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-02-20 01:52:44.619012 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-20 01:52:44.619021 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-02-20 01:52:44.619029 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-20 01:52:44.619048 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-20 01:52:45.004332 | orchestrator | + osism apply sshconfig 2026-02-20 01:52:57.435842 | orchestrator | 2026-02-20 01:52:57 | INFO  | Task fb887edc-2123-4aab-b7b2-f98b0e8a0ff9 (sshconfig) was prepared for execution. 
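[editor's annotation] Earlier in the trace, `++ semver 9.5.0 7.0.0` followed by `+ [[ 1 -ge 0 ]]` indicates a helper that prints a comparison sign (1, 0, or -1) for two versions, used here to gate the Docker Compose v2 path. The actual `semver` helper used by the job is not shown; a minimal sketch under the assumption of plain dot-separated numeric components (no pre-release tags):

```shell
# Print 1, 0, or -1 as version $1 is greater than, equal to, or less than $2.
# Assumes numeric MAJOR.MINOR.PATCH; missing components default to 0.
semver_cmp() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}
```

With this shape, the traced guard reads naturally: `[[ $(semver_cmp "$docker_compose_version" 7.0.0) -ge 0 ]]` is true for 9.5.0, so the newer code path runs.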
2026-02-20 01:52:57.435919 | orchestrator | 2026-02-20 01:52:57 | INFO  | It takes a moment until task fb887edc-2123-4aab-b7b2-f98b0e8a0ff9 (sshconfig) has been started and output is visible here. 2026-02-20 01:53:11.036383 | orchestrator | 2026-02-20 01:53:11.036522 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-20 01:53:11.036551 | orchestrator | 2026-02-20 01:53:11.036593 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-20 01:53:11.036613 | orchestrator | Friday 20 February 2026 01:53:02 +0000 (0:00:00.186) 0:00:00.186 ******* 2026-02-20 01:53:11.036631 | orchestrator | ok: [testbed-manager] 2026-02-20 01:53:11.036650 | orchestrator | 2026-02-20 01:53:11.036667 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-20 01:53:11.036685 | orchestrator | Friday 20 February 2026 01:53:03 +0000 (0:00:00.598) 0:00:00.785 ******* 2026-02-20 01:53:11.036702 | orchestrator | changed: [testbed-manager] 2026-02-20 01:53:11.036720 | orchestrator | 2026-02-20 01:53:11.036737 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-20 01:53:11.036753 | orchestrator | Friday 20 February 2026 01:53:03 +0000 (0:00:00.609) 0:00:01.394 ******* 2026-02-20 01:53:11.036772 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-20 01:53:11.036790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-20 01:53:11.036809 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-20 01:53:11.036827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-20 01:53:11.036844 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-20 01:53:11.036862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-20 01:53:11.036879 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-20 01:53:11.036898 | orchestrator | 2026-02-20 01:53:11.036915 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-20 01:53:11.036932 | orchestrator | Friday 20 February 2026 01:53:10 +0000 (0:00:06.135) 0:00:07.530 ******* 2026-02-20 01:53:11.036949 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:53:11.036968 | orchestrator | 2026-02-20 01:53:11.036985 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-20 01:53:11.037003 | orchestrator | Friday 20 February 2026 01:53:10 +0000 (0:00:00.088) 0:00:07.618 ******* 2026-02-20 01:53:11.037021 | orchestrator | changed: [testbed-manager] 2026-02-20 01:53:11.037040 | orchestrator | 2026-02-20 01:53:11.037057 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:53:11.037078 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-20 01:53:11.037097 | orchestrator | 2026-02-20 01:53:11.037115 | orchestrator | 2026-02-20 01:53:11.037132 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 01:53:11.037149 | orchestrator | Friday 20 February 2026 01:53:10 +0000 (0:00:00.598) 0:00:08.217 ******* 2026-02-20 01:53:11.037167 | orchestrator | =============================================================================== 2026-02-20 01:53:11.037185 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.14s 2026-02-20 01:53:11.037203 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.61s 2026-02-20 01:53:11.037222 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2026-02-20 01:53:11.037323 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.60s 2026-02-20 01:53:11.037384 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-20 01:53:11.497512 | orchestrator | + osism apply known-hosts 2026-02-20 01:53:23.905891 | orchestrator | 2026-02-20 01:53:23 | INFO  | Task a8487284-7ae9-4ca5-b0f6-f55d528faca9 (known-hosts) was prepared for execution. 2026-02-20 01:53:23.905989 | orchestrator | 2026-02-20 01:53:23 | INFO  | It takes a moment until task a8487284-7ae9-4ca5-b0f6-f55d528faca9 (known-hosts) has been started and output is visible here. 2026-02-20 01:53:42.781258 | orchestrator | 2026-02-20 01:53:42.781418 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-20 01:53:42.781443 | orchestrator | 2026-02-20 01:53:42.781459 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-20 01:53:42.781475 | orchestrator | Friday 20 February 2026 01:53:29 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-20 01:53:42.781489 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-20 01:53:42.781503 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-20 01:53:42.781516 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-20 01:53:42.781529 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-20 01:53:42.781542 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-20 01:53:42.781555 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-20 01:53:42.781570 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-20 01:53:42.781584 | orchestrator | 2026-02-20 01:53:42.781599 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-20 01:53:42.781615 | orchestrator | Friday 20 February 2026 01:53:35 +0000 (0:00:06.315) 0:00:06.503 ******* 2026-02-20 01:53:42.781632 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-20 01:53:42.781650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-20 01:53:42.781663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-20 01:53:42.781672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-20 01:53:42.781681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-20 01:53:42.781700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-20 01:53:42.781709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-20 01:53:42.781718 | orchestrator | 2026-02-20 01:53:42.781727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.781735 | orchestrator | Friday 20 February 2026 01:53:35 +0000 (0:00:00.165) 0:00:06.668 ******* 2026-02-20 01:53:42.781745 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrlgs978ZHOsv4ceEr7mohsfpfiqJgUu84h5w89bIo9992550ANY0N8b41LIXR30Wxm2VGKXXfoxv4yx5IlmWE=) 2026-02-20 01:53:42.781763 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDyIECtv/nEk7ZVX8GtpuRfSbeDs7Pv4I7DY3IcMqVhrZlTo0BrIKc1KBizQQzYkJBR8k0Gnp4zA+bpt9qR7+XXquRJSpTbKu9z8V3KauQhAi1/33HUOSAzroy8JolgZHsAlzz+QQA4WmyMIwbn4Bk/Tn5kQuUG+RpFEBuXUJDQU4FZKDEcZDc5eYZTt//SbXHC9pZS85SiGDpLy88UNrU8+CXA32dvIZX+ofvUuJlSKnQ5rwSzxUWEIO9ddR6A2F1KuF5+PtXb1TcXIJE3vWHaXjxLSqXGqrxVvElWlAKgyURq+jYc5qDVc+RQroI9bYKmIm1u1E0meeZ0X+fQeMBb5/T/seJ1+1GVRO5UKauEqhIiRWvlQXQjLOwRVJ6q1IC4RFYdzWLwae/W7NTQsUfODfAg8EukxagwtXdTkbOcngiZPIfImMCqxr7GDTUkuAtk5W0lJlfOAEwFUVLk9C8Kpa5MPkCJ3EtpJ3eSwbWrcjW4vXZAsoMo1Wbh5tv2zGE=) 2026-02-20 01:53:42.781796 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMlxxxiYm3e3D8dBerMxqHos+EfIe9O5+7f3GtNpc92T) 2026-02-20 01:53:42.781807 | orchestrator | 2026-02-20 01:53:42.781817 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.781828 | orchestrator | Friday 20 February 2026 01:53:36 +0000 (0:00:01.255) 0:00:07.923 ******* 2026-02-20 01:53:42.781838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyJxOLIia7jZzQxUfnAvsGP4w+Uprw0pvHET4HS0uYOqhvMpVloa6SdT55N2RJmgtCAxTcUq9NxvqQFqGOhPls=) 2026-02-20 01:53:42.781874 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCdhIxjZA9q89XSTlzgb2Nrf6R0AeCwWWahWmWIlDV6t14AHtWb8vz4L+J5UjXkO8c3V3JdoQDfS4WNm2pTTyP7ahjRiFgXNcTvRI15QzdBn4ecLBaDDYWzkbURbapqbhhMcgmX23CfIeVP77m8irRD0v+Mw0agq09w8BY0HjzQ3h/LFvXwzJad7GpjsQ/V5W+m56qwmPfEvZB4oK5Kkwq2TRyYrpiYs3EH+wl98r8BTjljpXdScVsDwosAYuGSuzu+fUSiwnfDBRrYfVO6rElHbFLWfKawfyOV50+sKEpGFqQH+kMqltDdtJ/WSkubEdJK9Dt7Nkf37OrLYd0951buzrBIUaFJaZzoP7c6uKTeHvIUuzA2StIoMoTctAjiLnKZKox5EGpbxngP5z0y+bT8uEPzlwLZ/KRgp5+ZkCz1fzdKDtEL/vlWkLO3zgVjPC4HKvVJx+tceGVfWV1w7vjpa2OWGc7ahZFA0pzswKqcDC2aoMBJj3YQpjxCE1hNxDM=) 2026-02-20 01:53:42.781885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPOlw12Y6lR+oVjmqbZ8UrNeOXHrwRQfz9RfYn6Vk7vV) 2026-02-20 01:53:42.781894 | orchestrator | 2026-02-20 01:53:42.781902 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.781911 | orchestrator | Friday 20 February 2026 01:53:37 +0000 (0:00:01.177) 0:00:09.100 ******* 2026-02-20 01:53:42.781920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC999Xhw74EuMyho3BjXVow6oYoAprkxR22kx/uJvu2AHnPTCTX+1JrKa1gDdrKsLPqu5OxcDVH28zNsaqT+K5Xfq1Uv2eJYsU1u5WcuRJK35+EmJPEiQ3JQDAglmLko2Gloqw+siQWDdLnrk0L+8nAChJgzswg9zEbOKyxG0aOY84EOwPL/owK4RsF8W+I4xl3xOOYNbYLr+QB+/95ZtLf0U6ZrnYELrKObtsPmgDaZFtExWBUWiadzoMYfVRaNdDLWtoyStLoVvP1rt9eFv8OhwFfg1xh929SsWBTMEhBFgUckyIPT5GuAVR0NyG6DZjx78mEY2F92QJFthBxQ2Uzp7+ei0LjRsPXmspTJkZoNUJAuoZ06KCucU2IGrLARLXLG4OyL1PcDnMFAkFdhuU2dvCMl+i9xWtSTz3borjbXaPOhNcFH5Vw14IoCH/U3r9dff0PkTJLTIiXTbl2bG+RUGku5s4PCiIU8zRgNr+HYH2138Ht127Kb0hoWgPObbc=) 2026-02-20 01:53:42.781929 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGBc2ODrhMFCaAbUb5Fx8dGZ+J4hxMyHS4I0J/RPPGb) 2026-02-20 01:53:42.781938 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQ40cUviWrQLHR2payNhQQkXKSOwlf7nonhhuFqk8O1jUe6wwvMgFMo7p+HFQ2k1xj2jRsVyrOIZBYaykqxbO0=) 2026-02-20 01:53:42.781947 | orchestrator | 2026-02-20 01:53:42.781956 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.781964 | orchestrator | Friday 20 February 2026 01:53:39 +0000 (0:00:01.166) 0:00:10.267 ******* 2026-02-20 01:53:42.781973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDoTkb7UN//d7menvDjYO79LrVrIrMvERrj0buxWDQqDgffI1pRsHwfQDPeeTG+emI44CMuxzKkDhZToehmxm22ozafFQV0CBAsjRN4LbXrVwptD//I4bkGD5C9scgWC9cumkIQFoyGWwJDuKLHa01vrtedygDwAz0KRpPX5fSfW6VWYZhu4VGwel1K7Lb1QYrL1o401ap+WuEUpFo79o4PyfDGQvgAo+xONRNlzD7C286kL9/yiiTyT+z6oCAxbBcg1dAsvykQmRZ7JR/Y7wq34fIr1xrrcnE6noBfVlUD84b2DgzLrB+8FZomKEMOs71wwo+T07ztQ11tzNjpW0EhGc1oieet8hE1aucvuOLPAgP54SjF4fzEEoZyzHRid6V3ZULvMydRtCxMcw+lPKKIdSK85CYo1c6hByvQESBnB40JDVLyRWUP/XujPY39ESfZethXZsX21NjNs7/jz5OlIHlzBCQy4cWWd/XOv1La4hO+vlFZt/vEVbxR6not84U=) 2026-02-20 01:53:42.782082 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMWlPIBcttkcg1rluXlf+LT81A70c6p8WOKODtrXhCtR) 2026-02-20 01:53:42.782093 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7A5NNCrn3/s3i+LiiopdgLzfT09r6Tv0uW8MB3B00/TJC2f8adPkJSqg3mONbys6IT1LgfRTx7zqUPsAMS+kk=) 2026-02-20 01:53:42.782102 | orchestrator | 2026-02-20 01:53:42.782111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.782119 | orchestrator | Friday 20 February 2026 01:53:40 +0000 (0:00:01.249) 0:00:11.517 ******* 2026-02-20 01:53:42.782197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMcCWmKNMXB8jm6W6BZuzpjLS/MO1pEd5IElJTLEB6i039rp10h4UmfFJOamVl577Y7tI1KFemucA6nksJ6bc/1WVCv449Y/ZFJq3LmA2n77AuPf47eZYRoGHY/lAOQa8qwZYFo4oaMfJxAdFn7LumaTPJRTxibAiIhLyNGXmr2Hl7GyDSDtKYqztQLAwY8hYoz/cEFR+QyuCrLkxx01zN+DMMLwhAZ3AVlv7+OkMBWO+PxQh/6jBZq98GsHA3w/RemHWTDGBS8YOcdWnRxAdUgvk+Javs7nFM/XlEWiY0q27YmdjuIpJdyDwUSc8q/XdRl0VhN4W78gPcFYFywpLM3YUu7VnrXapWRTHbA6K70X5d7MktFZQSOQ3PfP8W8SpIXgLyAayYpn+VlHEiWdsMqII9lAggXYynRmd9OX5T6G8JezMBLpp68QdbVPlR6qTJwf2AG1RGsvrHZE4cF/rPo/pzeTiaVoYZnSa7GMnA0l7R3PpBLOfI4esa2wpXhc=) 2026-02-20 01:53:42.782207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKKegRKihVJDkXV/Uw01MRgdo09xEeRkgfyIDVvW7wKQJ4VSgIJLowh9r51J0ZdCGEJuSb7YK8osj2x7G83iBqs=) 2026-02-20 01:53:42.782216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG7oG5OLb59T8ngtb8d/a+4BAHkEkWMpkM3Fjl5/h3B) 2026-02-20 01:53:42.782225 | orchestrator | 2026-02-20 01:53:42.782234 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:42.782242 | orchestrator | Friday 20 February 2026 01:53:41 +0000 (0:00:01.197) 0:00:12.715 ******* 2026-02-20 01:53:42.782287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAd2oK8cRZZJYYTndqT+DzSaNXEPx9DMsVdeka7ot0Dd84PT05A4vzBnqmDVmFAsKMBeseLZA5n5YK0o/oTIJJA=) 2026-02-20 01:53:54.635346 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmJSpfHb9CH1RKB8pExp7Iq8JdnK+RdZhpBiKYr8CvI) 2026-02-20 01:53:54.635445 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRvzpwk66h2WefFichRyt03k5g6F67aW7je+j6xUD10QJz1Ftr14qhb+ejDJzMBP9jEiOadg//1QwuLAjZOhP4TajV9jR5KwaMDjCaCPCd0uFD75DZQ6h9v81zeuJL9Aw6tU/H9b0dORBmMtZNwsjWyWq/SKhcHpRP81nScM0xysXOvoxTKVmKmoSabW8aDyaT027MTnrUu0KTSpTHK7sogjrhM6DrmgLWq+3SsU+P9S2IoUir6JVVDH3j/scRffb2Hd42gnPMF6ainih9TTBWSN0CRe6vStmRPjk1N6Rn7EFuvg15AsKUi3xpgUxeZZAgpSQvEPBk3YZXOg2eLmJc9eS4mnG69xwigny03N8znWwXg8LHjnS3VBjs+H5aIJboiS0QvpNkbAOrbT0EAFseyb+aMQ1SlzMaPfuftp2oiCTf9GwSFJpRX3+Okn4CevtfNaArYnDlka6HHrO1BYBRxwSRUjJBTS/3y4JQtxqN7KjuYFLuoJmwI5LaAqoEvvU=) 2026-02-20 01:53:54.635460 | orchestrator | 2026-02-20 01:53:54.635469 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:54.635479 | orchestrator | Friday 20 February 2026 01:53:42 +0000 (0:00:01.219) 0:00:13.934 ******* 2026-02-20 01:53:54.635488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSyQgi6Fv3iAAOUKAqqyT0Z6vtlbKQsyPSuVkU0TgLtRGDUBVFcZDkPpNZRxc8Pf05Odhru6QUwJH1vjgGI9Lpbidg3QhPFd/8Itw1scHkoH9Fbr5pDSJ2l3ZIDqMVHFR7ReJiLNtmosuYqxm8moosOGcrBJZ/k6gDgii7P4dF9k23Rxc92ZzZX1r9gp2G7EBQTg3ZxUbPlNPm/V8QFRya+w2yd3M635UwmIGnDn3wXb67wDwSKAWXrGn4z3BudHc2Cm05xACCux+4g41IrV3mTm4wgFSI/WgMCq3Y7h43f2L2JgsgAp0y9kv3xPftBNWywOevg+C/xQflAtj7Xt7m6kIUb1INNmRq8gnDvshiRYTb27I0IO77nTyFPls+SIVZx+OGAkkKFnxhxFN9q4OGJrxOJ8/eS89n5EKH5HMlyThU5aPHQw1qD3MlH/7nO9c7bdpQd/xxFebPghYp5/ExnlRxxeaZgv6aglm+pE5eermBSaILu+o1ocoWKV9P6hs=) 2026-02-20 01:53:54.635497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKZaz4j34pgnJRCHf0jwRV3PLsuFuZ3QmIJ7tquGu/nX0w0dQ6RRtNMWB0sRmnzXjLJmKc9wFeYfjU2GTPDS9Y=) 2026-02-20 01:53:54.635525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHaqPta4jOSjX0aQ7blfB5yPbpYtRxRnRx2Fe32PzyP) 2026-02-20 01:53:54.635533 | orchestrator | 2026-02-20 01:53:54.635541 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-20 01:53:54.635550 | orchestrator | Friday 20 February 2026 01:53:43 +0000 (0:00:01.134) 0:00:15.069 ******* 2026-02-20 01:53:54.635558 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-20 01:53:54.635567 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-20 01:53:54.635574 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-20 01:53:54.635582 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-20 01:53:54.635590 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-20 01:53:54.635597 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-20 01:53:54.635605 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-20 01:53:54.635613 | orchestrator | 2026-02-20 01:53:54.635621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-20 01:53:54.635630 | orchestrator | Friday 20 February 2026 01:53:49 +0000 (0:00:05.691) 0:00:20.761 ******* 2026-02-20 01:53:54.635639 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-20 01:53:54.635648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-20 01:53:54.635656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-20 01:53:54.635664 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-20 01:53:54.635672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-20 01:53:54.635680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-20 01:53:54.635688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-20 01:53:54.635696 | orchestrator | 2026-02-20 01:53:54.635715 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:54.635724 | orchestrator | Friday 20 February 2026 01:53:49 +0000 (0:00:00.217) 0:00:20.978 ******* 2026-02-20 01:53:54.635735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDyIECtv/nEk7ZVX8GtpuRfSbeDs7Pv4I7DY3IcMqVhrZlTo0BrIKc1KBizQQzYkJBR8k0Gnp4zA+bpt9qR7+XXquRJSpTbKu9z8V3KauQhAi1/33HUOSAzroy8JolgZHsAlzz+QQA4WmyMIwbn4Bk/Tn5kQuUG+RpFEBuXUJDQU4FZKDEcZDc5eYZTt//SbXHC9pZS85SiGDpLy88UNrU8+CXA32dvIZX+ofvUuJlSKnQ5rwSzxUWEIO9ddR6A2F1KuF5+PtXb1TcXIJE3vWHaXjxLSqXGqrxVvElWlAKgyURq+jYc5qDVc+RQroI9bYKmIm1u1E0meeZ0X+fQeMBb5/T/seJ1+1GVRO5UKauEqhIiRWvlQXQjLOwRVJ6q1IC4RFYdzWLwae/W7NTQsUfODfAg8EukxagwtXdTkbOcngiZPIfImMCqxr7GDTUkuAtk5W0lJlfOAEwFUVLk9C8Kpa5MPkCJ3EtpJ3eSwbWrcjW4vXZAsoMo1Wbh5tv2zGE=) 2026-02-20 01:53:54.635744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCrlgs978ZHOsv4ceEr7mohsfpfiqJgUu84h5w89bIo9992550ANY0N8b41LIXR30Wxm2VGKXXfoxv4yx5IlmWE=) 2026-02-20 01:53:54.635765 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMlxxxiYm3e3D8dBerMxqHos+EfIe9O5+7f3GtNpc92T) 2026-02-20 01:53:54.635774 | orchestrator | 2026-02-20 01:53:54.635782 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:54.635790 | orchestrator | Friday 20 February 2026 01:53:51 +0000 (0:00:01.240) 0:00:22.218 ******* 2026-02-20 01:53:54.635798 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPOlw12Y6lR+oVjmqbZ8UrNeOXHrwRQfz9RfYn6Vk7vV) 2026-02-20 01:53:54.635809 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdhIxjZA9q89XSTlzgb2Nrf6R0AeCwWWahWmWIlDV6t14AHtWb8vz4L+J5UjXkO8c3V3JdoQDfS4WNm2pTTyP7ahjRiFgXNcTvRI15QzdBn4ecLBaDDYWzkbURbapqbhhMcgmX23CfIeVP77m8irRD0v+Mw0agq09w8BY0HjzQ3h/LFvXwzJad7GpjsQ/V5W+m56qwmPfEvZB4oK5Kkwq2TRyYrpiYs3EH+wl98r8BTjljpXdScVsDwosAYuGSuzu+fUSiwnfDBRrYfVO6rElHbFLWfKawfyOV50+sKEpGFqQH+kMqltDdtJ/WSkubEdJK9Dt7Nkf37OrLYd0951buzrBIUaFJaZzoP7c6uKTeHvIUuzA2StIoMoTctAjiLnKZKox5EGpbxngP5z0y+bT8uEPzlwLZ/KRgp5+ZkCz1fzdKDtEL/vlWkLO3zgVjPC4HKvVJx+tceGVfWV1w7vjpa2OWGc7ahZFA0pzswKqcDC2aoMBJj3YQpjxCE1hNxDM=) 2026-02-20 01:53:54.635818 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyJxOLIia7jZzQxUfnAvsGP4w+Uprw0pvHET4HS0uYOqhvMpVloa6SdT55N2RJmgtCAxTcUq9NxvqQFqGOhPls=) 2026-02-20 01:53:54.635827 | orchestrator | 2026-02-20 01:53:54.635835 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:54.635845 | orchestrator | Friday 20 February 2026 01:53:52 +0000 (0:00:01.140) 0:00:23.359 ******* 2026-02-20 01:53:54.635854 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGBc2ODrhMFCaAbUb5Fx8dGZ+J4hxMyHS4I0J/RPPGb) 2026-02-20 01:53:54.635864 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC999Xhw74EuMyho3BjXVow6oYoAprkxR22kx/uJvu2AHnPTCTX+1JrKa1gDdrKsLPqu5OxcDVH28zNsaqT+K5Xfq1Uv2eJYsU1u5WcuRJK35+EmJPEiQ3JQDAglmLko2Gloqw+siQWDdLnrk0L+8nAChJgzswg9zEbOKyxG0aOY84EOwPL/owK4RsF8W+I4xl3xOOYNbYLr+QB+/95ZtLf0U6ZrnYELrKObtsPmgDaZFtExWBUWiadzoMYfVRaNdDLWtoyStLoVvP1rt9eFv8OhwFfg1xh929SsWBTMEhBFgUckyIPT5GuAVR0NyG6DZjx78mEY2F92QJFthBxQ2Uzp7+ei0LjRsPXmspTJkZoNUJAuoZ06KCucU2IGrLARLXLG4OyL1PcDnMFAkFdhuU2dvCMl+i9xWtSTz3borjbXaPOhNcFH5Vw14IoCH/U3r9dff0PkTJLTIiXTbl2bG+RUGku5s4PCiIU8zRgNr+HYH2138Ht127Kb0hoWgPObbc=) 2026-02-20 01:53:54.635873 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQ40cUviWrQLHR2payNhQQkXKSOwlf7nonhhuFqk8O1jUe6wwvMgFMo7p+HFQ2k1xj2jRsVyrOIZBYaykqxbO0=) 2026-02-20 01:53:54.635882 | orchestrator | 2026-02-20 01:53:54.635891 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:54.635900 | orchestrator | Friday 20 February 2026 01:53:53 +0000 (0:00:01.188) 0:00:24.547 ******* 2026-02-20 01:53:54.635908 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMWlPIBcttkcg1rluXlf+LT81A70c6p8WOKODtrXhCtR) 2026-02-20 01:53:54.635929 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDoTkb7UN//d7menvDjYO79LrVrIrMvERrj0buxWDQqDgffI1pRsHwfQDPeeTG+emI44CMuxzKkDhZToehmxm22ozafFQV0CBAsjRN4LbXrVwptD//I4bkGD5C9scgWC9cumkIQFoyGWwJDuKLHa01vrtedygDwAz0KRpPX5fSfW6VWYZhu4VGwel1K7Lb1QYrL1o401ap+WuEUpFo79o4PyfDGQvgAo+xONRNlzD7C286kL9/yiiTyT+z6oCAxbBcg1dAsvykQmRZ7JR/Y7wq34fIr1xrrcnE6noBfVlUD84b2DgzLrB+8FZomKEMOs71wwo+T07ztQ11tzNjpW0EhGc1oieet8hE1aucvuOLPAgP54SjF4fzEEoZyzHRid6V3ZULvMydRtCxMcw+lPKKIdSK85CYo1c6hByvQESBnB40JDVLyRWUP/XujPY39ESfZethXZsX21NjNs7/jz5OlIHlzBCQy4cWWd/XOv1La4hO+vlFZt/vEVbxR6not84U=) 2026-02-20 01:53:59.844542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7A5NNCrn3/s3i+LiiopdgLzfT09r6Tv0uW8MB3B00/TJC2f8adPkJSqg3mONbys6IT1LgfRTx7zqUPsAMS+kk=) 2026-02-20 01:53:59.844697 | orchestrator | 2026-02-20 01:53:59.844717 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:59.844730 | orchestrator | Friday 20 February 2026 01:53:54 +0000 (0:00:01.242) 0:00:25.789 ******* 2026-02-20 01:53:59.844744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFMcCWmKNMXB8jm6W6BZuzpjLS/MO1pEd5IElJTLEB6i039rp10h4UmfFJOamVl577Y7tI1KFemucA6nksJ6bc/1WVCv449Y/ZFJq3LmA2n77AuPf47eZYRoGHY/lAOQa8qwZYFo4oaMfJxAdFn7LumaTPJRTxibAiIhLyNGXmr2Hl7GyDSDtKYqztQLAwY8hYoz/cEFR+QyuCrLkxx01zN+DMMLwhAZ3AVlv7+OkMBWO+PxQh/6jBZq98GsHA3w/RemHWTDGBS8YOcdWnRxAdUgvk+Javs7nFM/XlEWiY0q27YmdjuIpJdyDwUSc8q/XdRl0VhN4W78gPcFYFywpLM3YUu7VnrXapWRTHbA6K70X5d7MktFZQSOQ3PfP8W8SpIXgLyAayYpn+VlHEiWdsMqII9lAggXYynRmd9OX5T6G8JezMBLpp68QdbVPlR6qTJwf2AG1RGsvrHZE4cF/rPo/pzeTiaVoYZnSa7GMnA0l7R3PpBLOfI4esa2wpXhc=) 2026-02-20 01:53:59.844764 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKKegRKihVJDkXV/Uw01MRgdo09xEeRkgfyIDVvW7wKQJ4VSgIJLowh9r51J0ZdCGEJuSb7YK8osj2x7G83iBqs=) 
2026-02-20 01:53:59.844783 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILG7oG5OLb59T8ngtb8d/a+4BAHkEkWMpkM3Fjl5/h3B) 2026-02-20 01:53:59.844803 | orchestrator | 2026-02-20 01:53:59.844821 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:59.844839 | orchestrator | Friday 20 February 2026 01:53:55 +0000 (0:00:01.230) 0:00:27.020 ******* 2026-02-20 01:53:59.844860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRvzpwk66h2WefFichRyt03k5g6F67aW7je+j6xUD10QJz1Ftr14qhb+ejDJzMBP9jEiOadg//1QwuLAjZOhP4TajV9jR5KwaMDjCaCPCd0uFD75DZQ6h9v81zeuJL9Aw6tU/H9b0dORBmMtZNwsjWyWq/SKhcHpRP81nScM0xysXOvoxTKVmKmoSabW8aDyaT027MTnrUu0KTSpTHK7sogjrhM6DrmgLWq+3SsU+P9S2IoUir6JVVDH3j/scRffb2Hd42gnPMF6ainih9TTBWSN0CRe6vStmRPjk1N6Rn7EFuvg15AsKUi3xpgUxeZZAgpSQvEPBk3YZXOg2eLmJc9eS4mnG69xwigny03N8znWwXg8LHjnS3VBjs+H5aIJboiS0QvpNkbAOrbT0EAFseyb+aMQ1SlzMaPfuftp2oiCTf9GwSFJpRX3+Okn4CevtfNaArYnDlka6HHrO1BYBRxwSRUjJBTS/3y4JQtxqN7KjuYFLuoJmwI5LaAqoEvvU=) 2026-02-20 01:53:59.844872 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAd2oK8cRZZJYYTndqT+DzSaNXEPx9DMsVdeka7ot0Dd84PT05A4vzBnqmDVmFAsKMBeseLZA5n5YK0o/oTIJJA=) 2026-02-20 01:53:59.844883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJmJSpfHb9CH1RKB8pExp7Iq8JdnK+RdZhpBiKYr8CvI) 2026-02-20 01:53:59.844894 | orchestrator | 2026-02-20 01:53:59.844923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-20 01:53:59.844945 | orchestrator | Friday 20 February 2026 01:53:57 +0000 (0:00:01.159) 0:00:28.179 ******* 2026-02-20 01:53:59.844957 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKZaz4j34pgnJRCHf0jwRV3PLsuFuZ3QmIJ7tquGu/nX0w0dQ6RRtNMWB0sRmnzXjLJmKc9wFeYfjU2GTPDS9Y=) 2026-02-20 01:53:59.844987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSyQgi6Fv3iAAOUKAqqyT0Z6vtlbKQsyPSuVkU0TgLtRGDUBVFcZDkPpNZRxc8Pf05Odhru6QUwJH1vjgGI9Lpbidg3QhPFd/8Itw1scHkoH9Fbr5pDSJ2l3ZIDqMVHFR7ReJiLNtmosuYqxm8moosOGcrBJZ/k6gDgii7P4dF9k23Rxc92ZzZX1r9gp2G7EBQTg3ZxUbPlNPm/V8QFRya+w2yd3M635UwmIGnDn3wXb67wDwSKAWXrGn4z3BudHc2Cm05xACCux+4g41IrV3mTm4wgFSI/WgMCq3Y7h43f2L2JgsgAp0y9kv3xPftBNWywOevg+C/xQflAtj7Xt7m6kIUb1INNmRq8gnDvshiRYTb27I0IO77nTyFPls+SIVZx+OGAkkKFnxhxFN9q4OGJrxOJ8/eS89n5EKH5HMlyThU5aPHQw1qD3MlH/7nO9c7bdpQd/xxFebPghYp5/ExnlRxxeaZgv6aglm+pE5eermBSaILu+o1ocoWKV9P6hs=) 2026-02-20 01:53:59.844999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHaqPta4jOSjX0aQ7blfB5yPbpYtRxRnRx2Fe32PzyP) 2026-02-20 01:53:59.845010 | orchestrator | 2026-02-20 01:53:59.845021 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-20 01:53:59.845042 | orchestrator | Friday 20 February 2026 01:53:58 +0000 (0:00:01.327) 0:00:29.507 ******* 2026-02-20 01:53:59.845054 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-20 01:53:59.845065 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-20 01:53:59.845076 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-20 01:53:59.845089 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-20 01:53:59.845123 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-20 01:53:59.845136 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-20 01:53:59.845149 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-20 01:53:59.845162 | orchestrator | skipping: 
[testbed-manager] 2026-02-20 01:53:59.845175 | orchestrator | 2026-02-20 01:53:59.845188 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-20 01:53:59.845200 | orchestrator | Friday 20 February 2026 01:53:58 +0000 (0:00:00.196) 0:00:29.704 ******* 2026-02-20 01:53:59.845213 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:53:59.845226 | orchestrator | 2026-02-20 01:53:59.845239 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-20 01:53:59.845251 | orchestrator | Friday 20 February 2026 01:53:58 +0000 (0:00:00.058) 0:00:29.762 ******* 2026-02-20 01:53:59.845263 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:53:59.845301 | orchestrator | 2026-02-20 01:53:59.845314 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-20 01:53:59.845326 | orchestrator | Friday 20 February 2026 01:53:58 +0000 (0:00:00.053) 0:00:29.815 ******* 2026-02-20 01:53:59.845338 | orchestrator | changed: [testbed-manager] 2026-02-20 01:53:59.845351 | orchestrator | 2026-02-20 01:53:59.845363 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 01:53:59.845375 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 01:53:59.845389 | orchestrator | 2026-02-20 01:53:59.845401 | orchestrator | 2026-02-20 01:53:59.845412 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 01:53:59.845424 | orchestrator | Friday 20 February 2026 01:53:59 +0000 (0:00:00.856) 0:00:30.671 ******* 2026-02-20 01:53:59.845442 | orchestrator | =============================================================================== 2026-02-20 01:53:59.845455 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.32s 2026-02-20 
01:53:59.845467 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.69s 2026-02-20 01:53:59.845480 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2026-02-20 01:53:59.845491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-02-20 01:53:59.845501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s 2026-02-20 01:53:59.845512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-02-20 01:53:59.845523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-02-20 01:53:59.845533 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-02-20 01:53:59.845544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-20 01:53:59.845554 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-20 01:53:59.845564 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-20 01:53:59.845575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-20 01:53:59.845585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-02-20 01:53:59.845596 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-20 01:53:59.845615 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-20 01:53:59.845626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-20 01:53:59.845636 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.86s 2026-02-20 
01:53:59.845647 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2026-02-20 01:53:59.845659 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-02-20 01:53:59.845670 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-02-20 01:54:00.267053 | orchestrator | + osism apply squid 2026-02-20 01:54:12.812779 | orchestrator | 2026-02-20 01:54:12 | INFO  | Task 9c5988c2-3983-4297-8a5f-de74aef6fc64 (squid) was prepared for execution. 2026-02-20 01:54:12.812890 | orchestrator | 2026-02-20 01:54:12 | INFO  | It takes a moment until task 9c5988c2-3983-4297-8a5f-de74aef6fc64 (squid) has been started and output is visible here. 2026-02-20 01:56:08.235345 | orchestrator | 2026-02-20 01:56:08.235493 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-20 01:56:08.235518 | orchestrator | 2026-02-20 01:56:08.235533 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-20 01:56:08.235546 | orchestrator | Friday 20 February 2026 01:54:18 +0000 (0:00:00.216) 0:00:00.216 ******* 2026-02-20 01:56:08.235555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 01:56:08.235564 | orchestrator | 2026-02-20 01:56:08.235572 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-20 01:56:08.235581 | orchestrator | Friday 20 February 2026 01:54:18 +0000 (0:00:00.091) 0:00:00.307 ******* 2026-02-20 01:56:08.235589 | orchestrator | ok: [testbed-manager] 2026-02-20 01:56:08.235598 | orchestrator | 2026-02-20 01:56:08.235606 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-20 
01:56:08.235614 | orchestrator | Friday 20 February 2026 01:54:20 +0000 (0:00:01.814) 0:00:02.122 ******* 2026-02-20 01:56:08.235622 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-20 01:56:08.235630 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-20 01:56:08.235638 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-20 01:56:08.235646 | orchestrator | 2026-02-20 01:56:08.235654 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-20 01:56:08.235662 | orchestrator | Friday 20 February 2026 01:54:21 +0000 (0:00:01.353) 0:00:03.475 ******* 2026-02-20 01:56:08.235670 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-20 01:56:08.235678 | orchestrator | 2026-02-20 01:56:08.235686 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-20 01:56:08.235694 | orchestrator | Friday 20 February 2026 01:54:22 +0000 (0:00:01.084) 0:00:04.560 ******* 2026-02-20 01:56:08.235702 | orchestrator | ok: [testbed-manager] 2026-02-20 01:56:08.235709 | orchestrator | 2026-02-20 01:56:08.235717 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-20 01:56:08.235725 | orchestrator | Friday 20 February 2026 01:54:23 +0000 (0:00:00.412) 0:00:04.972 ******* 2026-02-20 01:56:08.235734 | orchestrator | changed: [testbed-manager] 2026-02-20 01:56:08.235742 | orchestrator | 2026-02-20 01:56:08.235750 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-20 01:56:08.235758 | orchestrator | Friday 20 February 2026 01:54:24 +0000 (0:00:01.107) 0:00:06.079 ******* 2026-02-20 01:56:08.235766 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-20 01:56:08.235779 | orchestrator | ok: [testbed-manager]
2026-02-20 01:56:08.235787 | orchestrator |
2026-02-20 01:56:08.235795 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-20 01:56:08.235832 | orchestrator | Friday 20 February 2026 01:54:55 +0000 (0:00:30.841) 0:00:36.921 *******
2026-02-20 01:56:08.235840 | orchestrator | changed: [testbed-manager]
2026-02-20 01:56:08.235848 | orchestrator |
2026-02-20 01:56:08.235856 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-20 01:56:08.235864 | orchestrator | Friday 20 February 2026 01:55:07 +0000 (0:00:11.955) 0:00:48.876 *******
2026-02-20 01:56:08.235872 | orchestrator | Pausing for 60 seconds
2026-02-20 01:56:08.235880 | orchestrator | changed: [testbed-manager]
2026-02-20 01:56:08.235888 | orchestrator |
2026-02-20 01:56:08.235896 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-20 01:56:08.235906 | orchestrator | Friday 20 February 2026 01:56:07 +0000 (0:01:00.091) 0:01:48.968 *******
2026-02-20 01:56:08.235915 | orchestrator | ok: [testbed-manager]
2026-02-20 01:56:08.235925 | orchestrator |
2026-02-20 01:56:08.235934 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-20 01:56:08.235945 | orchestrator | Friday 20 February 2026 01:56:07 +0000 (0:00:00.071) 0:01:49.039 *******
2026-02-20 01:56:08.235954 | orchestrator | changed: [testbed-manager]
2026-02-20 01:56:08.235964 | orchestrator |
2026-02-20 01:56:08.235973 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 01:56:08.235982 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 01:56:08.235991 | orchestrator |
2026-02-20 01:56:08.236003 | orchestrator |
2026-02-20 01:56:08.236017 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 01:56:08.236030 | orchestrator | Friday 20 February 2026 01:56:07 +0000 (0:00:00.685) 0:01:49.725 *******
2026-02-20 01:56:08.236044 | orchestrator | ===============================================================================
2026-02-20 01:56:08.236057 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-02-20 01:56:08.236071 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.84s
2026-02-20 01:56:08.236084 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s
2026-02-20 01:56:08.236117 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.81s
2026-02-20 01:56:08.236133 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.35s
2026-02-20 01:56:08.236146 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.11s
2026-02-20 01:56:08.236160 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s
2026-02-20 01:56:08.236174 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s
2026-02-20 01:56:08.236187 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.41s
2026-02-20 01:56:08.236201 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2026-02-20 01:56:08.236215 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-02-20 01:56:08.648337 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-20 01:56:08.648569 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-20 01:56:08.691050 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-20 01:56:08.691168 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-20 01:56:08.695659 | orchestrator | + set -e
2026-02-20 01:56:08.695761 | orchestrator | + NAMESPACE=kolla/release
2026-02-20 01:56:08.695787 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-20 01:56:08.698835 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-20 01:56:08.746616 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-20 01:56:08.746955 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-20 01:56:21.058956 | orchestrator | 2026-02-20 01:56:21 | INFO  | Task 1fafb16d-63a2-48e4-bbd7-66d835438e32 (operator) was prepared for execution.
2026-02-20 01:56:21.059052 | orchestrator | 2026-02-20 01:56:21 | INFO  | It takes a moment until task 1fafb16d-63a2-48e4-bbd7-66d835438e32 (operator) has been started and output is visible here.
2026-02-20 01:56:39.550697 | orchestrator |
2026-02-20 01:56:39.550832 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-20 01:56:39.550859 | orchestrator |
2026-02-20 01:56:39.550879 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-20 01:56:39.550898 | orchestrator | Friday 20 February 2026 01:56:26 +0000 (0:00:00.168) 0:00:00.168 *******
2026-02-20 01:56:39.550917 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:56:39.550935 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:56:39.550954 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:56:39.550973 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:56:39.550991 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:56:39.551009 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:56:39.551029 | orchestrator |
2026-02-20 01:56:39.551047 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-20 01:56:39.551067 | orchestrator | Friday 20 February 2026 01:56:29 +0000 (0:00:03.416) 0:00:03.584 *******
2026-02-20 01:56:39.551086 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:56:39.551104 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:56:39.551123 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:56:39.551136 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:56:39.551147 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:56:39.551158 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:56:39.551168 | orchestrator |
2026-02-20 01:56:39.551179 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-20 01:56:39.551190 | orchestrator |
2026-02-20 01:56:39.551202 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-20 01:56:39.551214 | orchestrator | Friday 20 February 2026 01:56:30 +0000 (0:00:00.956) 0:00:04.540 *******
2026-02-20 01:56:39.551226 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:56:39.551238 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:56:39.551251 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:56:39.551262 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:56:39.551275 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:56:39.551288 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:56:39.551301 | orchestrator |
2026-02-20 01:56:39.551313 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-20 01:56:39.551343 | orchestrator | Friday 20 February 2026 01:56:31 +0000 (0:00:00.190) 0:00:04.731 *******
2026-02-20 01:56:39.551356 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:56:39.551368 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:56:39.551415 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:56:39.551428 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:56:39.551441 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:56:39.551453 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:56:39.551466 | orchestrator |
2026-02-20 01:56:39.551478 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-20 01:56:39.551490 | orchestrator | Friday 20 February 2026 01:56:31 +0000 (0:00:00.175) 0:00:04.906 *******
2026-02-20 01:56:39.551503 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:39.551516 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:39.551528 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:39.551541 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:39.551554 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:39.551566 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:39.551577 | orchestrator |
2026-02-20 01:56:39.551588 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-20 01:56:39.551598 | orchestrator | Friday 20 February 2026 01:56:32 +0000 (0:00:00.804) 0:00:05.710 *******
2026-02-20 01:56:39.551609 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:39.551620 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:39.551631 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:39.551642 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:39.551653 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:39.551663 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:39.551699 | orchestrator |
2026-02-20 01:56:39.551710 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-20 01:56:39.551721 | orchestrator | Friday 20 February 2026 01:56:32 +0000 (0:00:00.917) 0:00:06.628 *******
2026-02-20 01:56:39.551732 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-20 01:56:39.551743 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-20 01:56:39.551754 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-20 01:56:39.551764 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-20 01:56:39.551775 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-20 01:56:39.551785 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-20 01:56:39.551796 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-20 01:56:39.551806 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-20 01:56:39.551817 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-20 01:56:39.551827 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-20 01:56:39.551838 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-20 01:56:39.551848 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-20 01:56:39.551859 | orchestrator |
2026-02-20 01:56:39.551870 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-20 01:56:39.551880 | orchestrator | Friday 20 February 2026 01:56:34 +0000 (0:00:01.361) 0:00:07.990 *******
2026-02-20 01:56:39.551891 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:39.551901 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:39.551912 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:39.551922 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:39.551933 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:39.551943 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:39.551968 | orchestrator |
2026-02-20 01:56:39.551989 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-20 01:56:39.552002 | orchestrator | Friday 20 February 2026 01:56:35 +0000 (0:00:01.344) 0:00:09.334 *******
2026-02-20 01:56:39.552013 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-20 01:56:39.552023 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-20 01:56:39.552034 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-20 01:56:39.552045 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552077 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552088 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552098 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552109 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552119 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-20 01:56:39.552130 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552140 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552151 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552161 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552172 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552182 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-20 01:56:39.552193 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552203 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552213 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552224 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552234 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552253 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-20 01:56:39.552263 | orchestrator |
2026-02-20 01:56:39.552274 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-20 01:56:39.552285 | orchestrator | Friday 20 February 2026 01:56:37 +0000 (0:00:01.428) 0:00:10.763 *******
2026-02-20 01:56:39.552296 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:39.552306 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:39.552317 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:39.552327 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:39.552338 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:39.552349 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:39.552359 | orchestrator |
2026-02-20 01:56:39.552370 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-20 01:56:39.552415 | orchestrator | Friday 20 February 2026 01:56:37 +0000 (0:00:00.189) 0:00:10.952 *******
2026-02-20 01:56:39.552432 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:39.552444 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:39.552454 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:39.552465 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:39.552475 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:39.552494 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:39.552511 | orchestrator |
2026-02-20 01:56:39.552530 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-20 01:56:39.552548 | orchestrator | Friday 20 February 2026 01:56:37 +0000 (0:00:00.173) 0:00:11.126 *******
2026-02-20 01:56:39.552564 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:39.552580 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:39.552598 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:39.552617 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:39.552636 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:39.552654 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:39.552670 | orchestrator |
2026-02-20 01:56:39.552681 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-20 01:56:39.552698 | orchestrator | Friday 20 February 2026 01:56:38 +0000 (0:00:00.694) 0:00:11.821 *******
2026-02-20 01:56:39.552716 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:39.552743 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:39.552762 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:39.552779 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:39.552795 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:39.552812 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:39.552828 | orchestrator |
2026-02-20 01:56:39.552844 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-20 01:56:39.552861 | orchestrator | Friday 20 February 2026 01:56:38 +0000 (0:00:00.194) 0:00:12.015 *******
2026-02-20 01:56:39.552876 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-20 01:56:39.552908 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:39.552926 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-20 01:56:39.552943 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:39.552959 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-20 01:56:39.552975 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-20 01:56:39.552992 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:39.553009 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:39.553026 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-20 01:56:39.553044 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:39.553061 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-20 01:56:39.553079 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:39.553096 | orchestrator |
2026-02-20 01:56:39.553113 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-20 01:56:39.553129 | orchestrator | Friday 20 February 2026 01:56:39 +0000 (0:00:00.837) 0:00:12.852 *******
2026-02-20 01:56:39.553164 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:39.553183 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:39.553202 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:39.553220 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:39.553239 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:39.553256 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:39.553273 | orchestrator |
2026-02-20 01:56:39.553284 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-20 01:56:39.553295 | orchestrator | Friday 20 February 2026 01:56:39 +0000 (0:00:00.175) 0:00:13.028 *******
2026-02-20 01:56:39.553306 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:39.553316 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:39.553327 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:39.553338 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:39.553364 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:41.064890 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:41.064980 | orchestrator |
2026-02-20 01:56:41.064992 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-20 01:56:41.065005 | orchestrator | Friday 20 February 2026 01:56:39 +0000 (0:00:00.168) 0:00:13.196 *******
2026-02-20 01:56:41.065018 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:41.065031 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:41.065044 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:41.065057 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:41.065070 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:41.065083 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:41.065096 | orchestrator |
2026-02-20 01:56:41.065110 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-20 01:56:41.065123 | orchestrator | Friday 20 February 2026 01:56:39 +0000 (0:00:00.179) 0:00:13.375 *******
2026-02-20 01:56:41.065137 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:56:41.065147 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:56:41.065155 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:56:41.065162 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:56:41.065170 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:56:41.065178 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:56:41.065186 | orchestrator |
2026-02-20 01:56:41.065194 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-20 01:56:41.065202 | orchestrator | Friday 20 February 2026 01:56:40 +0000 (0:00:00.769) 0:00:14.144 *******
2026-02-20 01:56:41.065214 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:56:41.065226 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:56:41.065250 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:56:41.065262 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:56:41.065274 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:56:41.065287 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:56:41.065299 | orchestrator |
2026-02-20 01:56:41.065311 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 01:56:41.065347 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065363 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065412 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065427 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065441 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065482 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 01:56:41.065495 | orchestrator |
2026-02-20 01:56:41.065508 | orchestrator |
2026-02-20 01:56:41.065521 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 01:56:41.065534 | orchestrator | Friday 20 February 2026 01:56:40 +0000 (0:00:00.240) 0:00:14.385 *******
2026-02-20 01:56:41.065547 | orchestrator | ===============================================================================
2026-02-20 01:56:41.065561 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s
2026-02-20 01:56:41.065575 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.43s
2026-02-20 01:56:41.065591 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.36s
2026-02-20 01:56:41.065605 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.34s
2026-02-20 01:56:41.065618 | orchestrator | Do not require tty for all users ---------------------------------------- 0.96s
2026-02-20 01:56:41.065632 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.92s
2026-02-20 01:56:41.065645 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.84s
2026-02-20 01:56:41.065659 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.80s
2026-02-20 01:56:41.065673 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.77s
2026-02-20 01:56:41.065688 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.69s
2026-02-20 01:56:41.065701 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2026-02-20 01:56:41.065715 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-02-20 01:56:41.065728 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2026-02-20 01:56:41.065743 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s
2026-02-20 01:56:41.065757 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2026-02-20 01:56:41.065770 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2026-02-20 01:56:41.065782 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-02-20 01:56:41.065794 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-02-20 01:56:41.065802 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-02-20 01:56:41.488835 | orchestrator | + osism apply --environment custom facts
2026-02-20 01:56:43.654144 | orchestrator | 2026-02-20 01:56:43 | INFO  | Trying to run play facts in environment custom
2026-02-20 01:56:53.816207 | orchestrator | 2026-02-20 01:56:53 | INFO  | Task f639e241-f0f3-4eec-8fe4-733523b9431e (facts) was prepared for execution.
2026-02-20 01:56:53.816319 | orchestrator | 2026-02-20 01:56:53 | INFO  | It takes a moment until task f639e241-f0f3-4eec-8fe4-733523b9431e (facts) has been started and output is visible here.
2026-02-20 01:57:42.344962 | orchestrator |
2026-02-20 01:57:42.345096 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-20 01:57:42.345120 | orchestrator |
2026-02-20 01:57:42.345139 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-20 01:57:42.345156 | orchestrator | Friday 20 February 2026 01:56:58 +0000 (0:00:00.110) 0:00:00.110 *******
2026-02-20 01:57:42.345172 | orchestrator | ok: [testbed-manager]
2026-02-20 01:57:42.345190 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:57:42.345207 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.345224 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.345238 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.345252 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:57:42.345300 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:57:42.345316 | orchestrator |
2026-02-20 01:57:42.345331 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-20 01:57:42.345347 | orchestrator | Friday 20 February 2026 01:57:00 +0000 (0:00:01.471) 0:00:01.582 *******
2026-02-20 01:57:42.345362 | orchestrator | ok: [testbed-manager]
2026-02-20 01:57:42.345378 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:57:42.345393 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.345483 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.345506 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.345522 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:57:42.345538 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:57:42.345553 | orchestrator |
2026-02-20 01:57:42.345569 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-20 01:57:42.345586 | orchestrator |
2026-02-20 01:57:42.345603 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-20 01:57:42.345622 | orchestrator | Friday 20 February 2026 01:57:01 +0000 (0:00:01.388) 0:00:02.970 *******
2026-02-20 01:57:42.345639 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.345653 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.345670 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.345685 | orchestrator |
2026-02-20 01:57:42.345701 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-20 01:57:42.345715 | orchestrator | Friday 20 February 2026 01:57:01 +0000 (0:00:00.115) 0:00:03.086 *******
2026-02-20 01:57:42.345729 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.345743 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.345759 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.345772 | orchestrator |
2026-02-20 01:57:42.345784 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-20 01:57:42.345796 | orchestrator | Friday 20 February 2026 01:57:02 +0000 (0:00:00.224) 0:00:03.311 *******
2026-02-20 01:57:42.345809 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.345823 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.345836 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.345848 | orchestrator |
2026-02-20 01:57:42.345859 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-20 01:57:42.345873 | orchestrator | Friday 20 February 2026 01:57:02 +0000 (0:00:00.151) 0:00:03.555 *******
2026-02-20 01:57:42.345886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 01:57:42.345899 | orchestrator |
2026-02-20 01:57:42.345912 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-20 01:57:42.345926 | orchestrator | Friday 20 February 2026 01:57:02 +0000 (0:00:00.483) 0:00:03.706 *******
2026-02-20 01:57:42.345938 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.345949 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.345963 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.345974 | orchestrator |
2026-02-20 01:57:42.345986 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-20 01:57:42.346000 | orchestrator | Friday 20 February 2026 01:57:03 +0000 (0:00:00.135) 0:00:04.190 *******
2026-02-20 01:57:42.346080 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:57:42.346101 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:57:42.346113 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:57:42.346125 | orchestrator |
2026-02-20 01:57:42.346137 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-20 01:57:42.346150 | orchestrator | Friday 20 February 2026 01:57:03 +0000 (0:00:00.135) 0:00:04.325 *******
2026-02-20 01:57:42.346164 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.346176 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.346190 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.346204 | orchestrator |
2026-02-20 01:57:42.346218 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-20 01:57:42.346248 | orchestrator | Friday 20 February 2026 01:57:04 +0000 (0:00:01.133) 0:00:05.459 *******
2026-02-20 01:57:42.346262 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.346275 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.346290 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.346304 | orchestrator |
2026-02-20 01:57:42.346318 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-20 01:57:42.346331 | orchestrator | Friday 20 February 2026 01:57:04 +0000 (0:00:00.549) 0:00:06.009 *******
2026-02-20 01:57:42.346344 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.346358 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.346372 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.346385 | orchestrator |
2026-02-20 01:57:42.346398 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-20 01:57:42.346492 | orchestrator | Friday 20 February 2026 01:57:05 +0000 (0:00:01.073) 0:00:07.082 *******
2026-02-20 01:57:42.346510 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.346523 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.346537 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.346551 | orchestrator |
2026-02-20 01:57:42.346564 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-20 01:57:42.346577 | orchestrator | Friday 20 February 2026 01:57:23 +0000 (0:00:17.495) 0:00:24.578 *******
2026-02-20 01:57:42.346590 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:57:42.346604 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:57:42.346618 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:57:42.346631 | orchestrator |
2026-02-20 01:57:42.346645 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-20 01:57:42.346685 | orchestrator | Friday 20 February 2026 01:57:23 +0000 (0:00:00.105) 0:00:24.683 *******
2026-02-20 01:57:42.346699 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:57:42.346713 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:57:42.346726 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:57:42.346741 | orchestrator |
2026-02-20 01:57:42.346754 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-20 01:57:42.346767 | orchestrator | Friday 20 February 2026 01:57:32 +0000 (0:00:08.472) 0:00:33.155 *******
2026-02-20 01:57:42.346780 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.346793 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.346807 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.346819 | orchestrator |
2026-02-20 01:57:42.346833 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-20 01:57:42.346847 | orchestrator | Friday 20 February 2026 01:57:32 +0000 (0:00:00.477) 0:00:33.633 *******
2026-02-20 01:57:42.346859 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-20 01:57:42.346874 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-20 01:57:42.346888 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-20 01:57:42.346902 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-20 01:57:42.346924 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-20 01:57:42.346939 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-20 01:57:42.346953 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-20 01:57:42.346967 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-20 01:57:42.346981 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-20 01:57:42.346995 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-20 01:57:42.347009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-20 01:57:42.347023 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-20 01:57:42.347037 | orchestrator |
2026-02-20 01:57:42.347051 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-20 01:57:42.347079 | orchestrator | Friday 20 February 2026 01:57:36 +0000 (0:00:03.715) 0:00:37.348 *******
2026-02-20 01:57:42.347094 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.347108 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.347123 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.347137 | orchestrator |
2026-02-20 01:57:42.347150 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-20 01:57:42.347164 | orchestrator |
2026-02-20 01:57:42.347177 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 01:57:42.347191 | orchestrator | Friday 20 February 2026 01:57:37 +0000 (0:00:01.479) 0:00:38.827 *******
2026-02-20 01:57:42.347205 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:57:42.347218 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:57:42.347232 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:57:42.347245 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:57:42.347258 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:57:42.347273 | orchestrator | ok: [testbed-manager]
2026-02-20 01:57:42.347286 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:57:42.347300 | orchestrator |
2026-02-20 01:57:42.347313 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 01:57:42.347327 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 01:57:42.347343 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 01:57:42.347359 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 01:57:42.347372 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 01:57:42.347386 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 01:57:42.347400 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 01:57:42.347488 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 01:57:42.347502 | orchestrator |
2026-02-20 01:57:42.347515 | orchestrator |
2026-02-20 01:57:42.347527 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 01:57:42.347540 | orchestrator | Friday 20 February 2026 01:57:42 +0000 (0:00:04.647) 0:00:43.475 *******
2026-02-20 01:57:42.347553 | orchestrator | ===============================================================================
2026-02-20 01:57:42.347566 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.50s
2026-02-20 01:57:42.347578 | orchestrator | Install required packages (Debian) -------------------------------------- 8.47s
2026-02-20 01:57:42.347592 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s
2026-02-20 01:57:42.347605 | orchestrator | Copy fact files --------------------------------------------------------- 3.72s
2026-02-20 01:57:42.347617 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.48s
2026-02-20 01:57:42.347630 | orchestrator | Create custom facts directory ------------------------------------------- 1.47s
2026-02-20 01:57:42.347662 | orchestrator | Copy fact file ---------------------------------------------------------- 1.39s
2026-02-20 01:57:42.645571 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.13s
2026-02-20 01:57:42.645654 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-02-20 01:57:42.645664 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.55s 2026-02-20 01:57:42.645696 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s 2026-02-20 01:57:42.645704 | orchestrator | Create custom facts directory ------------------------------------------- 0.48s 2026-02-20 01:57:42.645712 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s 2026-02-20 01:57:42.645720 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s 2026-02-20 01:57:42.645728 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-02-20 01:57:42.645737 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-02-20 01:57:42.645745 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-02-20 01:57:42.645766 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-02-20 01:57:43.080574 | orchestrator | + osism apply bootstrap 2026-02-20 01:57:55.489121 | orchestrator | 2026-02-20 01:57:55 | INFO  | Task 0a7ad519-579c-45cc-b067-c5cc9f2641e8 (bootstrap) was prepared for execution. 2026-02-20 01:57:55.489194 | orchestrator | 2026-02-20 01:57:55 | INFO  | It takes a moment until task 0a7ad519-579c-45cc-b067-c5cc9f2641e8 (bootstrap) has been started and output is visible here. 
2026-02-20 01:58:14.985121 | orchestrator |
2026-02-20 01:58:14.985229 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-20 01:58:14.985257 | orchestrator |
2026-02-20 01:58:14.985278 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-20 01:58:14.985298 | orchestrator | Friday 20 February 2026 01:58:00 +0000 (0:00:00.193) 0:00:00.193 *******
2026-02-20 01:58:14.985316 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:14.985335 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:14.985353 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:14.985370 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:14.985387 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:14.985404 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:14.985421 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:14.985578 | orchestrator |
2026-02-20 01:58:14.985597 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-20 01:58:14.985614 | orchestrator |
2026-02-20 01:58:14.985630 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 01:58:14.985648 | orchestrator | Friday 20 February 2026 01:58:01 +0000 (0:00:00.283) 0:00:00.477 *******
2026-02-20 01:58:14.985667 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:14.985683 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:14.985700 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:14.985718 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:14.985734 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:14.985752 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:14.985768 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:14.985786 | orchestrator |
2026-02-20 01:58:14.985802 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-20 01:58:14.985820 | orchestrator |
2026-02-20 01:58:14.985837 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 01:58:14.985855 | orchestrator | Friday 20 February 2026 01:58:04 +0000 (0:00:03.607) 0:00:04.084 *******
2026-02-20 01:58:14.985873 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-20 01:58:14.985891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-20 01:58:14.985908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-20 01:58:14.985925 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-20 01:58:14.985943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 01:58:14.985960 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-20 01:58:14.985978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 01:58:14.985995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 01:58:14.986012 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-20 01:58:14.986147 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-20 01:58:14.986169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-20 01:58:14.986188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-20 01:58:14.986206 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-20 01:58:14.986224 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 01:58:14.986243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-20 01:58:14.986263 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-20 01:58:14.986282 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:14.986300 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-20 01:58:14.986319 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-20 01:58:14.986337 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:14.986357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 01:58:14.986377 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-20 01:58:14.986397 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 01:58:14.986416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-20 01:58:14.986463 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-20 01:58:14.986482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-20 01:58:14.986500 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 01:58:14.986517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-20 01:58:14.986534 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 01:58:14.986551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 01:58:14.986568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-20 01:58:14.986584 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 01:58:14.986601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-20 01:58:14.986617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 01:58:14.986634 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 01:58:14.986650 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 01:58:14.986666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 01:58:14.986682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 01:58:14.986699 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-20 01:58:14.986715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 01:58:14.986732 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 01:58:14.986749 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:14.986765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 01:58:14.986781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-20 01:58:14.986798 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 01:58:14.986814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 01:58:14.986858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 01:58:14.986876 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:14.986893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-20 01:58:14.986907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 01:58:14.986922 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:14.986938 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 01:58:14.986953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-20 01:58:14.986970 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:14.987000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 01:58:14.987038 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:14.987055 | orchestrator |
2026-02-20 01:58:14.987071 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-20 01:58:14.987081 | orchestrator |
2026-02-20 01:58:14.987092 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-20 01:58:14.987110 | orchestrator | Friday 20 February 2026 01:58:05 +0000 (0:00:00.540) 0:00:04.624 *******
2026-02-20 01:58:14.987126 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:14.987142 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:14.987158 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:14.987174 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:14.987188 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:14.987201 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:14.987215 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:14.987231 | orchestrator |
2026-02-20 01:58:14.987246 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-20 01:58:14.987261 | orchestrator | Friday 20 February 2026 01:58:06 +0000 (0:00:01.502) 0:00:06.127 *******
2026-02-20 01:58:14.987275 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:14.987289 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:14.987305 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:14.987319 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:14.987335 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:14.987349 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:14.987365 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:14.987380 | orchestrator |
2026-02-20 01:58:14.987396 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-20 01:58:14.987412 | orchestrator | Friday 20 February 2026 01:58:09 +0000 (0:00:02.337) 0:00:08.465 *******
2026-02-20 01:58:14.987460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:14.987479 | orchestrator |
2026-02-20 01:58:14.987494 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-20 01:58:14.987510 | orchestrator | Friday 20 February 2026 01:58:09 +0000 (0:00:00.336) 0:00:08.802 *******
2026-02-20 01:58:14.987525 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:14.987541 | orchestrator | changed: [testbed-manager]
2026-02-20 01:58:14.987556 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:14.987574 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:14.987589 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:14.987604 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:14.987619 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:14.987636 | orchestrator |
2026-02-20 01:58:14.987653 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-20 01:58:14.987669 | orchestrator | Friday 20 February 2026 01:58:12 +0000 (0:00:02.625) 0:00:11.427 *******
2026-02-20 01:58:14.987684 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:14.987700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:14.987717 | orchestrator |
2026-02-20 01:58:14.987732 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-20 01:58:14.987747 | orchestrator | Friday 20 February 2026 01:58:12 +0000 (0:00:00.332) 0:00:11.760 *******
2026-02-20 01:58:14.987762 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:14.987777 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:14.987791 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:14.987806 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:14.987820 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:14.987835 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:14.987877 | orchestrator |
2026-02-20 01:58:14.987892 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-20 01:58:14.987906 | orchestrator | Friday 20 February 2026 01:58:13 +0000 (0:00:01.135) 0:00:12.896 *******
2026-02-20 01:58:14.987921 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:14.987936 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:14.987950 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:14.987965 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:14.987980 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:14.987995 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:14.988009 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:14.988024 | orchestrator |
2026-02-20 01:58:14.988039 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-20 01:58:14.988054 | orchestrator | Friday 20 February 2026 01:58:14 +0000 (0:00:00.695) 0:00:13.591 *******
2026-02-20 01:58:14.988069 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:14.988083 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:14.988097 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:14.988122 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:14.988138 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:14.988153 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:14.988167 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:14.988182 | orchestrator |
2026-02-20 01:58:14.988197 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-20 01:58:14.988213 | orchestrator | Friday 20 February 2026 01:58:14 +0000 (0:00:00.252) 0:00:14.094 *******
2026-02-20 01:58:14.988228 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:14.988242 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:14.988273 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:28.521881 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:28.521989 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:28.522004 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:28.522075 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:28.522091 | orchestrator |
2026-02-20 01:58:28.522127 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-20 01:58:28.522141 | orchestrator | Friday 20 February 2026 01:58:15 +0000 (0:00:00.252) 0:00:14.346 *******
2026-02-20 01:58:28.522154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:28.522183 | orchestrator |
2026-02-20 01:58:28.522195 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-20 01:58:28.522206 | orchestrator | Friday 20 February 2026 01:58:15 +0000 (0:00:00.362) 0:00:14.709 *******
2026-02-20 01:58:28.522217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:28.522228 | orchestrator |
2026-02-20 01:58:28.522239 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-20 01:58:28.522249 | orchestrator | Friday 20 February 2026 01:58:15 +0000 (0:00:00.315) 0:00:15.025 *******
2026-02-20 01:58:28.522260 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.522272 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.522283 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.522294 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.522305 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.522316 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.522326 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.522337 | orchestrator |
2026-02-20 01:58:28.522348 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-20 01:58:28.522359 | orchestrator | Friday 20 February 2026 01:58:17 +0000 (0:00:01.664) 0:00:16.689 *******
2026-02-20 01:58:28.522395 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:28.522408 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:28.522421 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:28.522455 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:28.522468 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:28.522481 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:28.522493 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:28.522505 | orchestrator |
2026-02-20 01:58:28.522518 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-20 01:58:28.522531 | orchestrator | Friday 20 February 2026 01:58:17 +0000 (0:00:00.360) 0:00:17.050 *******
2026-02-20 01:58:28.522543 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.522556 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.522567 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.522577 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.522588 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.522599 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.522609 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.522620 | orchestrator |
2026-02-20 01:58:28.522631 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-20 01:58:28.522642 | orchestrator | Friday 20 February 2026 01:58:18 +0000 (0:00:00.618) 0:00:17.668 *******
2026-02-20 01:58:28.522652 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:28.522663 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:28.522674 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:28.522684 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:28.522695 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:28.522706 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:28.522717 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:28.522727 | orchestrator |
2026-02-20 01:58:28.522739 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-20 01:58:28.522751 | orchestrator | Friday 20 February 2026 01:58:18 +0000 (0:00:00.255) 0:00:17.924 *******
2026-02-20 01:58:28.522762 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.522773 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:28.522783 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:28.522794 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:28.522804 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:28.522815 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:28.522825 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:28.522836 | orchestrator |
2026-02-20 01:58:28.522847 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-20 01:58:28.522858 | orchestrator | Friday 20 February 2026 01:58:19 +0000 (0:00:00.613) 0:00:18.538 *******
2026-02-20 01:58:28.522868 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.522879 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:28.522890 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:28.522900 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:28.522911 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:28.522921 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:28.522932 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:28.522942 | orchestrator |
2026-02-20 01:58:28.522953 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-20 01:58:28.522964 | orchestrator | Friday 20 February 2026 01:58:20 +0000 (0:00:01.167) 0:00:19.705 *******
2026-02-20 01:58:28.522975 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.522994 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523005 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523016 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523026 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523037 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.523048 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.523058 | orchestrator |
2026-02-20 01:58:28.523069 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-20 01:58:28.523087 | orchestrator | Friday 20 February 2026 01:58:21 +0000 (0:00:01.118) 0:00:20.824 *******
2026-02-20 01:58:28.523119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:28.523131 | orchestrator |
2026-02-20 01:58:28.523143 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-20 01:58:28.523153 | orchestrator | Friday 20 February 2026 01:58:21 +0000 (0:00:00.365) 0:00:21.189 *******
2026-02-20 01:58:28.523164 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:28.523175 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:28.523185 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:28.523196 | orchestrator | changed: [testbed-node-4]
2026-02-20 01:58:28.523207 | orchestrator | changed: [testbed-node-3]
2026-02-20 01:58:28.523217 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:28.523228 | orchestrator | changed: [testbed-node-5]
2026-02-20 01:58:28.523239 | orchestrator |
2026-02-20 01:58:28.523249 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-20 01:58:28.523260 | orchestrator | Friday 20 February 2026 01:58:23 +0000 (0:00:01.471) 0:00:22.661 *******
2026-02-20 01:58:28.523271 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523282 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523292 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523303 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523314 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.523324 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.523335 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.523346 | orchestrator |
2026-02-20 01:58:28.523357 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-20 01:58:28.523367 | orchestrator | Friday 20 February 2026 01:58:23 +0000 (0:00:00.286) 0:00:22.948 *******
2026-02-20 01:58:28.523378 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523389 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523399 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523410 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523420 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.523463 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.523475 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.523486 | orchestrator |
2026-02-20 01:58:28.523497 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-20 01:58:28.523508 | orchestrator | Friday 20 February 2026 01:58:23 +0000 (0:00:00.252) 0:00:23.200 *******
2026-02-20 01:58:28.523519 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523529 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523540 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523551 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523561 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.523572 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.523582 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.523593 | orchestrator |
2026-02-20 01:58:28.523604 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-20 01:58:28.523615 | orchestrator | Friday 20 February 2026 01:58:24 +0000 (0:00:00.265) 0:00:23.465 *******
2026-02-20 01:58:28.523627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 01:58:28.523639 | orchestrator |
2026-02-20 01:58:28.523650 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-20 01:58:28.523661 | orchestrator | Friday 20 February 2026 01:58:24 +0000 (0:00:00.315) 0:00:23.781 *******
2026-02-20 01:58:28.523671 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523682 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523701 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523723 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523734 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.523745 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.523755 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.523766 | orchestrator |
2026-02-20 01:58:28.523777 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-20 01:58:28.523788 | orchestrator | Friday 20 February 2026 01:58:25 +0000 (0:00:00.676) 0:00:24.457 *******
2026-02-20 01:58:28.523798 | orchestrator | skipping: [testbed-manager]
2026-02-20 01:58:28.523809 | orchestrator | skipping: [testbed-node-3]
2026-02-20 01:58:28.523820 | orchestrator | skipping: [testbed-node-4]
2026-02-20 01:58:28.523831 | orchestrator | skipping: [testbed-node-5]
2026-02-20 01:58:28.523842 | orchestrator | skipping: [testbed-node-0]
2026-02-20 01:58:28.523853 | orchestrator | skipping: [testbed-node-1]
2026-02-20 01:58:28.523863 | orchestrator | skipping: [testbed-node-2]
2026-02-20 01:58:28.523874 | orchestrator |
2026-02-20 01:58:28.523885 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-20 01:58:28.523896 | orchestrator | Friday 20 February 2026 01:58:25 +0000 (0:00:00.285) 0:00:24.742 *******
2026-02-20 01:58:28.523909 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.523928 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.523947 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.523964 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.523982 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:58:28.524000 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:58:28.524015 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:58:28.524032 | orchestrator |
2026-02-20 01:58:28.524049 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-20 01:58:28.524066 | orchestrator | Friday 20 February 2026 01:58:26 +0000 (0:00:01.238) 0:00:25.981 *******
2026-02-20 01:58:28.524083 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.524100 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.524118 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.524136 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.524154 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:58:28.524171 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:58:28.524188 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:58:28.524206 | orchestrator |
2026-02-20 01:58:28.524225 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-20 01:58:28.524243 | orchestrator | Friday 20 February 2026 01:58:27 +0000 (0:00:00.627) 0:00:26.609 *******
2026-02-20 01:58:28.524261 | orchestrator | ok: [testbed-manager]
2026-02-20 01:58:28.524280 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:58:28.524297 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:58:28.524325 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:58:28.524355 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:59:13.601831 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:59:13.601933 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:59:13.601947 | orchestrator |
2026-02-20 01:59:13.601959 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-20 01:59:13.601970 | orchestrator | Friday 20 February 2026 01:58:28 +0000 (0:00:01.176) 0:00:27.785 *******
2026-02-20 01:59:13.601980 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:59:13.601991 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:59:13.602001 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:59:13.602011 | orchestrator | changed: [testbed-manager]
2026-02-20 01:59:13.602111 | orchestrator | changed: [testbed-node-1]
2026-02-20 01:59:13.602123 | orchestrator | changed: [testbed-node-2]
2026-02-20 01:59:13.602133 | orchestrator | changed: [testbed-node-0]
2026-02-20 01:59:13.602142 | orchestrator |
2026-02-20 01:59:13.602153 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-20 01:59:13.602163 | orchestrator | Friday 20 February 2026 01:58:47 +0000 (0:00:18.792) 0:00:46.578 *******
2026-02-20 01:59:13.602173 | orchestrator | ok: [testbed-manager]
2026-02-20 01:59:13.602218 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:59:13.602228 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:59:13.602237 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:59:13.602247 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:59:13.602256 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:59:13.602265 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:59:13.602275 | orchestrator |
2026-02-20 01:59:13.602285 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-20 01:59:13.602294 | orchestrator | Friday 20 February 2026 01:58:47 +0000 (0:00:00.267) 0:00:46.846 *******
2026-02-20 01:59:13.602304 | orchestrator | ok: [testbed-manager]
2026-02-20 01:59:13.602313 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:59:13.602323 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:59:13.602332 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:59:13.602341 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:59:13.602352 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:59:13.602363 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:59:13.602374 | orchestrator |
2026-02-20 01:59:13.602385 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-20 01:59:13.602396 | orchestrator | Friday 20 February 2026 01:58:47 +0000 (0:00:00.268) 0:00:47.114 *******
2026-02-20 01:59:13.602407 | orchestrator | ok: [testbed-manager]
2026-02-20 01:59:13.602418 | orchestrator | ok: [testbed-node-3]
2026-02-20 01:59:13.602429 | orchestrator | ok: [testbed-node-4]
2026-02-20 01:59:13.602440 | orchestrator | ok: [testbed-node-5]
2026-02-20 01:59:13.602477 | orchestrator | ok: [testbed-node-0]
2026-02-20 01:59:13.602490 | orchestrator | ok: [testbed-node-1]
2026-02-20 01:59:13.602502 | orchestrator | ok: [testbed-node-2]
2026-02-20 01:59:13.602513 | orchestrator |
2026-02-20 01:59:13.602523 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-20 01:59:13.602535 | orchestrator | Friday 20 February 2026 01:58:48 +0000 (0:00:00.270) 0:00:47.384 *******
2026-02-20
01:59:13.602548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 01:59:13.602563 | orchestrator | 2026-02-20 01:59:13.602574 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-20 01:59:13.602585 | orchestrator | Friday 20 February 2026 01:58:48 +0000 (0:00:00.345) 0:00:47.730 ******* 2026-02-20 01:59:13.602596 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.602607 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.602618 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.602629 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.602640 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.602652 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.602662 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.602674 | orchestrator | 2026-02-20 01:59:13.602686 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-20 01:59:13.602697 | orchestrator | Friday 20 February 2026 01:58:50 +0000 (0:00:02.047) 0:00:49.777 ******* 2026-02-20 01:59:13.602708 | orchestrator | changed: [testbed-manager] 2026-02-20 01:59:13.602719 | orchestrator | changed: [testbed-node-3] 2026-02-20 01:59:13.602728 | orchestrator | changed: [testbed-node-4] 2026-02-20 01:59:13.602738 | orchestrator | changed: [testbed-node-5] 2026-02-20 01:59:13.602748 | orchestrator | changed: [testbed-node-0] 2026-02-20 01:59:13.602757 | orchestrator | changed: [testbed-node-2] 2026-02-20 01:59:13.602767 | orchestrator | changed: [testbed-node-1] 2026-02-20 01:59:13.602776 | orchestrator | 2026-02-20 01:59:13.602786 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-20 01:59:13.602795 | 
orchestrator | Friday 20 February 2026 01:58:51 +0000 (0:00:01.140) 0:00:50.918 ******* 2026-02-20 01:59:13.602805 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.602815 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.602824 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.602842 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.602852 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.602861 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.602871 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.602880 | orchestrator | 2026-02-20 01:59:13.602890 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-20 01:59:13.602900 | orchestrator | Friday 20 February 2026 01:58:52 +0000 (0:00:00.888) 0:00:51.806 ******* 2026-02-20 01:59:13.602910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 01:59:13.602921 | orchestrator | 2026-02-20 01:59:13.602944 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-20 01:59:13.602955 | orchestrator | Friday 20 February 2026 01:58:52 +0000 (0:00:00.314) 0:00:52.121 ******* 2026-02-20 01:59:13.602965 | orchestrator | changed: [testbed-node-3] 2026-02-20 01:59:13.602974 | orchestrator | changed: [testbed-manager] 2026-02-20 01:59:13.602984 | orchestrator | changed: [testbed-node-4] 2026-02-20 01:59:13.602994 | orchestrator | changed: [testbed-node-5] 2026-02-20 01:59:13.603003 | orchestrator | changed: [testbed-node-2] 2026-02-20 01:59:13.603013 | orchestrator | changed: [testbed-node-1] 2026-02-20 01:59:13.603022 | orchestrator | changed: [testbed-node-0] 2026-02-20 01:59:13.603032 | orchestrator | 2026-02-20 01:59:13.603059 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-20 01:59:13.603069 | orchestrator | Friday 20 February 2026 01:58:53 +0000 (0:00:01.107) 0:00:53.228 ******* 2026-02-20 01:59:13.603079 | orchestrator | skipping: [testbed-manager] 2026-02-20 01:59:13.603089 | orchestrator | skipping: [testbed-node-3] 2026-02-20 01:59:13.603098 | orchestrator | skipping: [testbed-node-4] 2026-02-20 01:59:13.603108 | orchestrator | skipping: [testbed-node-5] 2026-02-20 01:59:13.603118 | orchestrator | skipping: [testbed-node-0] 2026-02-20 01:59:13.603127 | orchestrator | skipping: [testbed-node-1] 2026-02-20 01:59:13.603136 | orchestrator | skipping: [testbed-node-2] 2026-02-20 01:59:13.603146 | orchestrator | 2026-02-20 01:59:13.603156 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-20 01:59:13.603165 | orchestrator | Friday 20 February 2026 01:58:54 +0000 (0:00:00.268) 0:00:53.497 ******* 2026-02-20 01:59:13.603175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 01:59:13.603185 | orchestrator | 2026-02-20 01:59:13.603195 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-20 01:59:13.603204 | orchestrator | Friday 20 February 2026 01:58:54 +0000 (0:00:00.356) 0:00:53.854 ******* 2026-02-20 01:59:13.603214 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.603223 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.603233 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.603242 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.603252 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.603262 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.603271 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.603280 | 
orchestrator | 2026-02-20 01:59:13.603290 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-20 01:59:13.603300 | orchestrator | Friday 20 February 2026 01:58:56 +0000 (0:00:01.976) 0:00:55.830 ******* 2026-02-20 01:59:13.603309 | orchestrator | changed: [testbed-manager] 2026-02-20 01:59:13.603319 | orchestrator | changed: [testbed-node-3] 2026-02-20 01:59:13.603329 | orchestrator | changed: [testbed-node-5] 2026-02-20 01:59:13.603338 | orchestrator | changed: [testbed-node-4] 2026-02-20 01:59:13.603347 | orchestrator | changed: [testbed-node-0] 2026-02-20 01:59:13.603357 | orchestrator | changed: [testbed-node-2] 2026-02-20 01:59:13.603366 | orchestrator | changed: [testbed-node-1] 2026-02-20 01:59:13.603382 | orchestrator | 2026-02-20 01:59:13.603392 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-20 01:59:13.603402 | orchestrator | Friday 20 February 2026 01:58:57 +0000 (0:00:01.229) 0:00:57.059 ******* 2026-02-20 01:59:13.603411 | orchestrator | changed: [testbed-node-3] 2026-02-20 01:59:13.603421 | orchestrator | changed: [testbed-node-2] 2026-02-20 01:59:13.603430 | orchestrator | changed: [testbed-node-5] 2026-02-20 01:59:13.603440 | orchestrator | changed: [testbed-node-4] 2026-02-20 01:59:13.603449 | orchestrator | changed: [testbed-node-0] 2026-02-20 01:59:13.603525 | orchestrator | changed: [testbed-node-1] 2026-02-20 01:59:13.603536 | orchestrator | changed: [testbed-manager] 2026-02-20 01:59:13.603546 | orchestrator | 2026-02-20 01:59:13.603556 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-20 01:59:13.603565 | orchestrator | Friday 20 February 2026 01:59:10 +0000 (0:00:13.040) 0:01:10.100 ******* 2026-02-20 01:59:13.603575 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.603584 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.603594 | orchestrator | ok: 
[testbed-node-4] 2026-02-20 01:59:13.603603 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.603612 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.603622 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.603631 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.603640 | orchestrator | 2026-02-20 01:59:13.603650 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-20 01:59:13.603660 | orchestrator | Friday 20 February 2026 01:59:11 +0000 (0:00:00.886) 0:01:10.987 ******* 2026-02-20 01:59:13.603669 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.603679 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.603688 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.603703 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.603721 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.603744 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.603767 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.603784 | orchestrator | 2026-02-20 01:59:13.603801 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-20 01:59:13.603819 | orchestrator | Friday 20 February 2026 01:59:12 +0000 (0:00:01.003) 0:01:11.990 ******* 2026-02-20 01:59:13.603835 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.603850 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.603866 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.603883 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.603900 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.603917 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.603933 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.603950 | orchestrator | 2026-02-20 01:59:13.603968 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-20 01:59:13.603985 | orchestrator | Friday 
20 February 2026 01:59:12 +0000 (0:00:00.275) 0:01:12.266 ******* 2026-02-20 01:59:13.604002 | orchestrator | ok: [testbed-manager] 2026-02-20 01:59:13.604020 | orchestrator | ok: [testbed-node-3] 2026-02-20 01:59:13.604038 | orchestrator | ok: [testbed-node-4] 2026-02-20 01:59:13.604055 | orchestrator | ok: [testbed-node-5] 2026-02-20 01:59:13.604073 | orchestrator | ok: [testbed-node-0] 2026-02-20 01:59:13.604090 | orchestrator | ok: [testbed-node-1] 2026-02-20 01:59:13.604106 | orchestrator | ok: [testbed-node-2] 2026-02-20 01:59:13.604123 | orchestrator | 2026-02-20 01:59:13.604151 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-20 01:59:13.604169 | orchestrator | Friday 20 February 2026 01:59:13 +0000 (0:00:00.266) 0:01:12.533 ******* 2026-02-20 01:59:13.604189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 01:59:13.604211 | orchestrator | 2026-02-20 01:59:13.604243 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-20 02:01:42.314536 | orchestrator | Friday 20 February 2026 01:59:13 +0000 (0:00:00.334) 0:01:12.867 ******* 2026-02-20 02:01:42.314615 | orchestrator | ok: [testbed-manager] 2026-02-20 02:01:42.314623 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314627 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314632 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314636 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314640 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314644 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.314649 | orchestrator | 2026-02-20 02:01:42.314654 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-20 02:01:42.314658 | orchestrator | Friday 20 February 2026 01:59:15 +0000 (0:00:02.215) 0:01:15.083 ******* 2026-02-20 02:01:42.314662 | orchestrator | changed: [testbed-manager] 2026-02-20 02:01:42.314668 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:01:42.314672 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:01:42.314676 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:01:42.314680 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:01:42.314684 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:01:42.314688 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:01:42.314692 | orchestrator | 2026-02-20 02:01:42.314696 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-20 02:01:42.314701 | orchestrator | Friday 20 February 2026 01:59:16 +0000 (0:00:00.654) 0:01:15.737 ******* 2026-02-20 02:01:42.314705 | orchestrator | ok: [testbed-manager] 2026-02-20 02:01:42.314709 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314713 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314717 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314721 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314725 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.314729 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314733 | orchestrator | 2026-02-20 02:01:42.314738 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-20 02:01:42.314743 | orchestrator | Friday 20 February 2026 01:59:16 +0000 (0:00:00.254) 0:01:15.992 ******* 2026-02-20 02:01:42.314747 | orchestrator | ok: [testbed-manager] 2026-02-20 02:01:42.314751 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314755 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314759 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314763 | orchestrator | ok: [testbed-node-1] 
2026-02-20 02:01:42.314767 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314771 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314775 | orchestrator | 2026-02-20 02:01:42.314779 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-20 02:01:42.314783 | orchestrator | Friday 20 February 2026 01:59:18 +0000 (0:00:01.489) 0:01:17.481 ******* 2026-02-20 02:01:42.314787 | orchestrator | changed: [testbed-manager] 2026-02-20 02:01:42.314791 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:01:42.314795 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:01:42.314800 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:01:42.314804 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:01:42.314808 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:01:42.314812 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:01:42.314816 | orchestrator | 2026-02-20 02:01:42.314823 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-20 02:01:42.314827 | orchestrator | Friday 20 February 2026 01:59:20 +0000 (0:00:02.327) 0:01:19.809 ******* 2026-02-20 02:01:42.314831 | orchestrator | ok: [testbed-manager] 2026-02-20 02:01:42.314836 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314840 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314844 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314848 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314852 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314856 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.314860 | orchestrator | 2026-02-20 02:01:42.314864 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-20 02:01:42.314886 | orchestrator | Friday 20 February 2026 01:59:23 +0000 (0:00:03.153) 0:01:22.963 ******* 2026-02-20 02:01:42.314890 | orchestrator | ok: 
[testbed-manager] 2026-02-20 02:01:42.314895 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.314899 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314903 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314906 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314910 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314914 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314918 | orchestrator | 2026-02-20 02:01:42.314923 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-20 02:01:42.314927 | orchestrator | Friday 20 February 2026 02:00:04 +0000 (0:00:40.859) 0:02:03.822 ******* 2026-02-20 02:01:42.314931 | orchestrator | changed: [testbed-manager] 2026-02-20 02:01:42.314935 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:01:42.314939 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:01:42.314943 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:01:42.314947 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:01:42.314951 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:01:42.314955 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:01:42.314959 | orchestrator | 2026-02-20 02:01:42.314963 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-20 02:01:42.314967 | orchestrator | Friday 20 February 2026 02:01:25 +0000 (0:01:20.796) 0:03:24.619 ******* 2026-02-20 02:01:42.314971 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.314975 | orchestrator | ok: [testbed-manager] 2026-02-20 02:01:42.314979 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.314983 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.314987 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.314991 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.314995 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.314999 | orchestrator | 2026-02-20 02:01:42.315003 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-02-20 02:01:42.315008 | orchestrator | Friday 20 February 2026 02:01:27 +0000 (0:00:02.209) 0:03:26.829 ******* 2026-02-20 02:01:42.315012 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:01:42.315016 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:01:42.315020 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:01:42.315024 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:01:42.315028 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:01:42.315032 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:01:42.315036 | orchestrator | changed: [testbed-manager] 2026-02-20 02:01:42.315040 | orchestrator | 2026-02-20 02:01:42.315044 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-20 02:01:42.315048 | orchestrator | Friday 20 February 2026 02:01:40 +0000 (0:00:13.247) 0:03:40.076 ******* 2026-02-20 02:01:42.315073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-20 02:01:42.315091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-20 02:01:42.315102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-20 02:01:42.315108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-20 02:01:42.315114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-20 02:01:42.315119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-20 02:01:42.315124 | orchestrator | 2026-02-20 02:01:42.315128 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-20 02:01:42.315134 | orchestrator | Friday 20 February 2026 02:01:41 +0000 (0:00:00.467) 0:03:40.544 ******* 2026-02-20 02:01:42.315139 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-02-20 02:01:42.315144 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-20 02:01:42.315148 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:01:42.315153 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:01:42.315158 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-20 02:01:42.315163 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-20 02:01:42.315168 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:01:42.315173 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:01:42.315177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:01:42.315182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:01:42.315187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:01:42.315192 | orchestrator | 2026-02-20 02:01:42.315197 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-20 02:01:42.315202 | orchestrator | Friday 20 February 2026 02:01:42 +0000 (0:00:00.934) 0:03:41.478 ******* 2026-02-20 02:01:42.315209 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-20 02:01:42.315215 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-20 02:01:42.315220 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-20 02:01:42.315225 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-20 02:01:42.315229 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-02-20 02:01:42.315237 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-20 02:01:50.427319 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-20 02:01:50.427395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-20 02:01:50.427418 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-20 02:01:50.427422 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-20 02:01:50.427426 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-20 02:01:50.427430 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-20 02:01:50.427434 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-20 02:01:50.427438 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-20 02:01:50.427442 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-20 02:01:50.427446 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-20 02:01:50.427450 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-20 02:01:50.427454 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-20 02:01:50.427458 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-20 02:01:50.427462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 
8192})  2026-02-20 02:01:50.427466 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:01:50.427470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-20 02:01:50.427474 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:01:50.427478 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-20 02:01:50.427482 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-20 02:01:50.427485 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-20 02:01:50.427489 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-20 02:01:50.427493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-20 02:01:50.427497 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-20 02:01:50.427500 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-20 02:01:50.427504 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-20 02:01:50.427508 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-20 02:01:50.427511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-20 02:01:50.427515 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-20 02:01:50.427537 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-20 02:01:50.427541 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})  2026-02-20 02:01:50.427545 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-20 02:01:50.427549 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-20 02:01:50.427552 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-20 02:01:50.427556 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-20 02:01:50.427560 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:01:50.427564 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-20 02:01:50.427571 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-20 02:01:50.427575 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:01:50.427590 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-20 02:01:50.427594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-20 02:01:50.427597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-20 02:01:50.427601 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-20 02:01:50.427605 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-20 02:01:50.427618 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-20 02:01:50.427622 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-20 02:01:50.427626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 
2026-02-20 02:01:50.427630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-20 02:01:50.427634 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-20 02:01:50.427637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-20 02:01:50.427641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-20 02:01:50.427644 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-20 02:01:50.427648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-20 02:01:50.427652 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-20 02:01:50.427656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-20 02:01:50.427659 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-20 02:01:50.427663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-20 02:01:50.427667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-02-20 02:01:50.427670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-02-20 02:01:50.427674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-02-20 02:01:50.427678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-20 02:01:50.427682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-02-20 02:01:50.427688 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-20 02:01:50.427694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-20 02:01:50.427700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-20 02:01:50.427706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-02-20 02:01:50.427713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-02-20 02:01:50.427719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-02-20 02:01:50.427727 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-02-20 02:01:50.427739 | orchestrator | 2026-02-20 02:01:50.427746 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-02-20 02:01:50.427753 | orchestrator | Friday 20 February 2026 02:01:49 +0000 (0:00:07.069) 0:03:48.548 ******* 2026-02-20 02:01:50.427759 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427767 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427771 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427775 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427779 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 02:01:50.427786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-02-20 
02:01:50.427790 | orchestrator | 2026-02-20 02:01:50.427794 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-02-20 02:01:50.427797 | orchestrator | Friday 20 February 2026 02:01:49 +0000 (0:00:00.615) 0:03:49.164 ******* 2026-02-20 02:01:50.427801 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:01:50.427805 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:01:50.427808 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:01:50.427816 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:01:50.427819 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:01:50.427823 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:01:50.427827 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:01:50.427831 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:01:50.427834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:01:50.427838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:01:50.427845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:02:06.432792 | orchestrator | 2026-02-20 02:02:06.433819 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-02-20 02:02:06.433872 | orchestrator | Friday 20 February 2026 02:01:50 +0000 (0:00:00.528) 0:03:49.692 ******* 2026-02-20 02:02:06.433884 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 
02:02:06.433896 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:02:06.433908 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:02:06.433919 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:02:06.433930 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:02:06.433940 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-20 02:02:06.433951 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:02:06.433962 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:02:06.433974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:02:06.433985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:02:06.433996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-20 02:02:06.434008 | orchestrator | 2026-02-20 02:02:06.434074 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-20 02:02:06.434116 | orchestrator | Friday 20 February 2026 02:01:52 +0000 (0:00:02.576) 0:03:52.268 ******* 2026-02-20 02:02:06.434128 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-20 02:02:06.434138 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:02:06.434150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-20 02:02:06.434161 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:02:06.434171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-20 02:02:06.434181 
| orchestrator | skipping: [testbed-node-1] 2026-02-20 02:02:06.434191 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-20 02:02:06.434202 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:02:06.434213 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-20 02:02:06.434224 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-20 02:02:06.434235 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-20 02:02:06.434246 | orchestrator | 2026-02-20 02:02:06.434258 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-20 02:02:06.434268 | orchestrator | Friday 20 February 2026 02:01:53 +0000 (0:00:00.667) 0:03:52.935 ******* 2026-02-20 02:02:06.434280 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:02:06.434291 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:02:06.434302 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:02:06.434313 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:02:06.434324 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:02:06.434334 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:02:06.434345 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:02:06.434356 | orchestrator | 2026-02-20 02:02:06.434368 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-20 02:02:06.434378 | orchestrator | Friday 20 February 2026 02:01:54 +0000 (0:00:00.387) 0:03:53.322 ******* 2026-02-20 02:02:06.434390 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:02:06.434402 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:02:06.434412 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:02:06.434424 | orchestrator | ok: [testbed-node-0] 
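The skipping/changed pattern in the sysctl tasks above (e.g. `nf_conntrack_max` applied only to compute or network nodes, `fs.inotify.max_user_instances` only to k3s nodes) reflects per-group parameter sets. A minimal sketch of that gating, assuming simple group-to-parameter-set mapping; the group names and values come from the task output, but the selection logic here is illustrative, not the actual role:

```python
# Illustrative sketch of group-based sysctl gating, assuming each inventory
# group carries its own parameter set. Values are from the task output above;
# the merge logic is an assumption, not the osism role implementation.
SYSCTL_SETS = {
    "generic": {"vm.swappiness": 1},
    "compute": {"net.netfilter.nf_conntrack_max": 1048576},
    "network": {"net.netfilter.nf_conntrack_max": 1048576},
    "k3s_node": {"fs.inotify.max_user_instances": 1024},
}


def params_for(groups: set) -> dict:
    """Merge the sysctl sets for every group the host belongs to."""
    merged = {}
    for group in sorted(groups):
        merged.update(SYSCTL_SETS.get(group, {}))
    return merged
```

Under this model, a host only in `generic` (like testbed-manager above) receives just `vm.swappiness`, while a host also in `compute` and `k3s_node` (like testbed-node-3) additionally receives the conntrack and inotify settings; hosts outside a group show up as `skipping` for that task.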
2026-02-20 02:02:06.434434 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:02:06.434445 | orchestrator | ok: [testbed-manager] 2026-02-20 02:02:06.434456 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:02:06.434465 | orchestrator | 2026-02-20 02:02:06.434476 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-20 02:02:06.434486 | orchestrator | Friday 20 February 2026 02:01:59 +0000 (0:00:05.442) 0:03:58.765 ******* 2026-02-20 02:02:06.434496 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-20 02:02:06.434508 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-20 02:02:06.434518 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:02:06.434548 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-20 02:02:06.434559 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:02:06.434569 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-20 02:02:06.434580 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:02:06.434590 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-20 02:02:06.434600 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:02:06.434610 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-20 02:02:06.434638 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:02:06.434649 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:02:06.434660 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-20 02:02:06.434670 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:02:06.434692 | orchestrator | 2026-02-20 02:02:06.434701 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-20 02:02:06.434710 | orchestrator | Friday 20 February 2026 02:01:59 +0000 (0:00:00.338) 0:03:59.103 ******* 2026-02-20 02:02:06.434720 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-20 02:02:06.434729 | orchestrator | 
ok: [testbed-manager] => (item=cron) 2026-02-20 02:02:06.434738 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-20 02:02:06.434771 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-20 02:02:06.434781 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-20 02:02:06.434792 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-20 02:02:06.434802 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-20 02:02:06.434811 | orchestrator | 2026-02-20 02:02:06.434822 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-20 02:02:06.434833 | orchestrator | Friday 20 February 2026 02:02:01 +0000 (0:00:01.314) 0:04:00.417 ******* 2026-02-20 02:02:06.434845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:02:06.434859 | orchestrator | 2026-02-20 02:02:06.434869 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-20 02:02:06.434879 | orchestrator | Friday 20 February 2026 02:02:01 +0000 (0:00:00.475) 0:04:00.893 ******* 2026-02-20 02:02:06.434890 | orchestrator | ok: [testbed-manager] 2026-02-20 02:02:06.434901 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:02:06.434911 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:02:06.434921 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:02:06.434931 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:02:06.434942 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:02:06.434951 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:02:06.434962 | orchestrator | 2026-02-20 02:02:06.434971 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-20 02:02:06.434981 | orchestrator | Friday 20 February 2026 02:02:03 +0000 
(0:00:01.559) 0:04:02.453 ******* 2026-02-20 02:02:06.434991 | orchestrator | ok: [testbed-manager] 2026-02-20 02:02:06.435002 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:02:06.435122 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:02:06.435138 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:02:06.435148 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:02:06.435159 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:02:06.435169 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:02:06.435179 | orchestrator | 2026-02-20 02:02:06.435190 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-20 02:02:06.435201 | orchestrator | Friday 20 February 2026 02:02:03 +0000 (0:00:00.700) 0:04:03.153 ******* 2026-02-20 02:02:06.435211 | orchestrator | changed: [testbed-manager] 2026-02-20 02:02:06.435221 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:02:06.435231 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:02:06.435242 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:02:06.435252 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:02:06.435261 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:02:06.435272 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:02:06.435282 | orchestrator | 2026-02-20 02:02:06.435291 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-20 02:02:06.435310 | orchestrator | Friday 20 February 2026 02:02:04 +0000 (0:00:00.708) 0:04:03.862 ******* 2026-02-20 02:02:06.435319 | orchestrator | ok: [testbed-manager] 2026-02-20 02:02:06.435327 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:02:06.435338 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:02:06.435347 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:02:06.435356 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:02:06.435366 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:02:06.435376 | orchestrator | ok: 
[testbed-node-2] 2026-02-20 02:02:06.435386 | orchestrator | 2026-02-20 02:02:06.435395 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-20 02:02:06.435418 | orchestrator | Friday 20 February 2026 02:02:05 +0000 (0:00:00.639) 0:04:04.502 ******* 2026-02-20 02:02:06.435433 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551437.5696573, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:06.435445 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551458.0688486, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:06.435464 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551464.6504543, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:06.435501 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551462.6876707, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761129 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551465.1810303, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761221 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551457.9769676, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761232 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771551464.8478172, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761262 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761270 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761291 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761299 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761321 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761330 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-02-20 02:02:11.761337 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 02:02:11.761350 | orchestrator | 2026-02-20 02:02:11.761359 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-20 02:02:11.761368 | orchestrator | Friday 20 February 2026 02:02:06 +0000 (0:00:01.188) 0:04:05.691 ******* 2026-02-20 02:02:11.761376 | orchestrator | changed: [testbed-manager] 2026-02-20 02:02:11.761384 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:02:11.761391 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:02:11.761398 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:02:11.761406 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:02:11.761413 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:02:11.761420 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:02:11.761427 | orchestrator | 2026-02-20 02:02:11.761434 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-20 02:02:11.761441 | orchestrator | Friday 20 February 2026 02:02:07 +0000 (0:00:01.278) 0:04:06.970 ******* 2026-02-20 02:02:11.761449 | orchestrator | changed: [testbed-manager] 2026-02-20 02:02:11.761456 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:02:11.761463 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:02:11.761470 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:02:11.761477 | 
orchestrator | changed: [testbed-node-1] 2026-02-20 02:02:11.761486 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:02:11.761498 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:02:11.761510 | orchestrator | 2026-02-20 02:02:11.761561 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-20 02:02:11.761574 | orchestrator | Friday 20 February 2026 02:02:08 +0000 (0:00:01.224) 0:04:08.194 ******* 2026-02-20 02:02:11.761585 | orchestrator | changed: [testbed-manager] 2026-02-20 02:02:11.761597 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:02:11.761608 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:02:11.761619 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:02:11.761630 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:02:11.761641 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:02:11.761653 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:02:11.761664 | orchestrator | 2026-02-20 02:02:11.761677 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-20 02:02:11.761691 | orchestrator | Friday 20 February 2026 02:02:10 +0000 (0:00:01.232) 0:04:09.427 ******* 2026-02-20 02:02:11.761704 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:02:11.761716 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:02:11.761736 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:02:11.761747 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:02:11.761760 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:02:11.761781 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:02:11.761793 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:02:11.761805 | orchestrator | 2026-02-20 02:02:11.761816 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-20 02:02:11.761827 | orchestrator | Friday 20 February 2026 02:02:10 +0000 
(0:00:00.351) 0:04:09.778 ******* 2026-02-20 02:02:11.761837 | orchestrator | ok: [testbed-manager] 2026-02-20 02:02:11.761850 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:02:11.761862 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:02:11.761876 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:02:11.761888 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:02:11.761899 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:02:11.761910 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:02:11.761923 | orchestrator | 2026-02-20 02:02:11.761934 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-20 02:02:11.761946 | orchestrator | Friday 20 February 2026 02:02:11 +0000 (0:00:00.801) 0:04:10.580 ******* 2026-02-20 02:02:11.761962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:02:11.761988 | orchestrator | 2026-02-20 02:02:11.761996 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-20 02:02:11.762067 | orchestrator | Friday 20 February 2026 02:02:11 +0000 (0:00:00.446) 0:04:11.027 ******* 2026-02-20 02:03:30.822871 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.822971 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:03:30.822983 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:03:30.822990 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:03:30.822996 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:03:30.823003 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:03:30.823009 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:03:30.823015 | orchestrator | 2026-02-20 02:03:30.823023 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-02-20 02:03:30.823030 | orchestrator | Friday 20 February 2026 02:02:20 +0000 (0:00:08.923) 0:04:19.950 ******* 2026-02-20 02:03:30.823036 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823042 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823049 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823055 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823061 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823067 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:03:30.823072 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:03:30.823078 | orchestrator | 2026-02-20 02:03:30.823084 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-20 02:03:30.823089 | orchestrator | Friday 20 February 2026 02:02:22 +0000 (0:00:01.559) 0:04:21.510 ******* 2026-02-20 02:03:30.823095 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823100 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823105 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823111 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:03:30.823116 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823122 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823128 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:03:30.823133 | orchestrator | 2026-02-20 02:03:30.823140 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-20 02:03:30.823146 | orchestrator | Friday 20 February 2026 02:02:23 +0000 (0:00:01.250) 0:04:22.760 ******* 2026-02-20 02:03:30.823152 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823159 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823165 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823172 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:03:30.823179 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823186 | orchestrator | ok: [testbed-node-1] 
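The rng install task above reports `ok` for testbed-manager but `changed` for the worker nodes: the manager already had the package, so the task was a no-op there. A small sketch of that idempotent ensure-present semantics (hypothetical helper, not Ansible's apt module):

```python
# Illustrative sketch of Ansible's ok/changed semantics for an
# ensure-present package task. The function name is hypothetical.
def ensure_installed(installed: set, package: str) -> str:
    """Install `package` if missing; report 'changed' only when work was done."""
    if package in installed:
        return "ok"       # desired state already holds -> no-op
    installed.add(package)
    return "changed"      # state was modified to reach the desired state
```

Rerunning the same task against the same host should then always converge to `ok`, which is why a second pass over already-provisioned nodes (as in an upgrade job like this one) produces mostly `ok` results.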
2026-02-20 02:03:30.823192 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823198 | orchestrator | 2026-02-20 02:03:30.823204 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-20 02:03:30.823212 | orchestrator | Friday 20 February 2026 02:02:23 +0000 (0:00:00.306) 0:04:23.067 ******* 2026-02-20 02:03:30.823218 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823224 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823230 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823236 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:03:30.823241 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823247 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:03:30.823253 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823259 | orchestrator | 2026-02-20 02:03:30.823265 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-20 02:03:30.823271 | orchestrator | Friday 20 February 2026 02:02:24 +0000 (0:00:00.368) 0:04:23.435 ******* 2026-02-20 02:03:30.823277 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823283 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823290 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823315 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:03:30.823319 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823322 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:03:30.823326 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823330 | orchestrator | 2026-02-20 02:03:30.823334 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-20 02:03:30.823337 | orchestrator | Friday 20 February 2026 02:02:24 +0000 (0:00:00.339) 0:04:23.774 ******* 2026-02-20 02:03:30.823341 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:03:30.823345 | orchestrator | ok: [testbed-node-5] 
2026-02-20 02:03:30.823351 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:03:30.823356 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:03:30.823362 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:03:30.823367 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:03:30.823372 | orchestrator | ok: [testbed-manager] 2026-02-20 02:03:30.823377 | orchestrator | 2026-02-20 02:03:30.823382 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-20 02:03:30.823387 | orchestrator | Friday 20 February 2026 02:02:28 +0000 (0:00:04.488) 0:04:28.263 ******* 2026-02-20 02:03:30.823399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:03:30.823410 | orchestrator | 2026-02-20 02:03:30.823416 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-20 02:03:30.823422 | orchestrator | Friday 20 February 2026 02:02:29 +0000 (0:00:00.614) 0:04:28.877 ******* 2026-02-20 02:03:30.823428 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-20 02:03:30.823434 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-20 02:03:30.823440 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-20 02:03:30.823446 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-20 02:03:30.823451 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:03:30.823473 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-20 02:03:30.823480 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-20 02:03:30.823486 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:03:30.823492 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-02-20 02:03:30.823499 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-20 02:03:30.823505 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:30.823511 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-20 02:03:30.823517 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-20 02:03:30.823524 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:30.823534 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-20 02:03:30.823542 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:30.823642 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-20 02:03:30.823648 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:30.823652 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-20 02:03:30.823656 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-20 02:03:30.823660 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:30.823664 | orchestrator |
2026-02-20 02:03:30.823667 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-20 02:03:30.823671 | orchestrator | Friday 20 February 2026 02:02:29 +0000 (0:00:00.389) 0:04:29.267 *******
2026-02-20 02:03:30.823676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:03:30.823680 | orchestrator |
2026-02-20 02:03:30.823684 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-20 02:03:30.823696 | orchestrator | Friday 20 February 2026 02:02:30 +0000 (0:00:00.502) 0:04:29.769 *******
2026-02-20 02:03:30.823700 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-20 02:03:30.823704 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:30.823708 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-20 02:03:30.823712 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:30.823715 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-20 02:03:30.823719 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:30.823723 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-20 02:03:30.823727 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-20 02:03:30.823730 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:30.823734 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-20 02:03:30.823738 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:30.823741 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:30.823745 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-20 02:03:30.823749 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:30.823752 | orchestrator |
2026-02-20 02:03:30.823756 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-20 02:03:30.823760 | orchestrator | Friday 20 February 2026 02:02:30 +0000 (0:00:00.344) 0:04:30.113 *******
2026-02-20 02:03:30.823764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:03:30.823768 | orchestrator |
2026-02-20 02:03:30.823771 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-20 02:03:30.823775 | orchestrator | Friday 20 February 2026 02:02:31 +0000 (0:00:00.499) 0:04:30.613 *******
2026-02-20 02:03:30.823779 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:30.823782 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:30.823786 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:30.823790 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:30.823793 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:30.823797 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:30.823801 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:30.823804 | orchestrator |
2026-02-20 02:03:30.823808 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-20 02:03:30.823812 | orchestrator | Friday 20 February 2026 02:03:01 +0000 (0:00:30.381) 0:05:00.994 *******
2026-02-20 02:03:30.823815 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:30.823819 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:30.823823 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:30.823829 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:30.823836 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:30.823842 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:30.823852 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:30.823859 | orchestrator |
2026-02-20 02:03:30.823866 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-20 02:03:30.823885 | orchestrator | Friday 20 February 2026 02:03:10 +0000 (0:00:09.162) 0:05:10.157 *******
2026-02-20 02:03:30.823891 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:30.823897 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:30.823903 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:30.823909 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:30.823915 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:30.823922 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:30.823930 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:30.823937 | orchestrator |
2026-02-20 02:03:30.823944 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-20 02:03:30.823958 | orchestrator | Friday 20 February 2026 02:03:20 +0000 (0:00:09.601) 0:05:19.758 *******
2026-02-20 02:03:30.823965 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:30.823971 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:30.823975 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:30.823979 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:30.823983 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:30.823986 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:30.823990 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:30.823994 | orchestrator |
2026-02-20 02:03:30.823997 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-20 02:03:30.824002 | orchestrator | Friday 20 February 2026 02:03:23 +0000 (0:00:02.659) 0:05:22.418 *******
2026-02-20 02:03:30.824005 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:30.824009 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:30.824013 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:30.824016 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:30.824020 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:30.824024 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:30.824027 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:30.824031 | orchestrator |
2026-02-20 02:03:30.824039 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-20 02:03:43.999467 | orchestrator | Friday 20 February 2026 02:03:30 +0000 (0:00:07.662) 0:05:30.081 *******
2026-02-20 02:03:43.999551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:03:43.999599 | orchestrator |
2026-02-20 02:03:43.999606 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-20 02:03:43.999611 | orchestrator | Friday 20 February 2026 02:03:31 +0000 (0:00:00.480) 0:05:30.561 *******
2026-02-20 02:03:43.999616 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:43.999622 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:43.999626 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:43.999631 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:43.999635 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:43.999640 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:43.999644 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:43.999649 | orchestrator |
2026-02-20 02:03:43.999654 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-20 02:03:43.999659 | orchestrator | Friday 20 February 2026 02:03:32 +0000 (0:00:00.856) 0:05:31.418 *******
2026-02-20 02:03:43.999664 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:43.999669 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:43.999674 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:43.999678 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:43.999683 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:43.999687 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:43.999692 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:43.999696 | orchestrator |
2026-02-20 02:03:43.999701 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-20 02:03:43.999706 | orchestrator | Friday 20 February 2026 02:03:34 +0000 (0:00:02.142) 0:05:33.561 *******
2026-02-20 02:03:43.999710 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:03:43.999715 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:03:43.999719 | orchestrator | changed: [testbed-manager]
2026-02-20 02:03:43.999724 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:03:43.999728 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:03:43.999733 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:03:43.999738 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:03:43.999743 | orchestrator |
2026-02-20 02:03:43.999747 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-20 02:03:43.999752 | orchestrator | Friday 20 February 2026 02:03:35 +0000 (0:00:00.901) 0:05:34.462 *******
2026-02-20 02:03:43.999774 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:43.999779 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:43.999783 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:43.999789 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:43.999797 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:43.999804 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:43.999812 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:43.999818 | orchestrator |
2026-02-20 02:03:43.999828 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-20 02:03:43.999839 | orchestrator | Friday 20 February 2026 02:03:35 +0000 (0:00:00.339) 0:05:34.802 *******
2026-02-20 02:03:43.999846 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:43.999853 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:43.999861 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:43.999869 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:43.999877 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:43.999885 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:43.999892 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:43.999900 | orchestrator |
2026-02-20 02:03:43.999923 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-20 02:03:43.999937 | orchestrator | Friday 20 February 2026 02:03:35 +0000 (0:00:00.465) 0:05:35.267 *******
2026-02-20 02:03:43.999945 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:43.999952 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:43.999960 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:43.999967 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:43.999975 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:43.999982 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:43.999989 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:43.999997 | orchestrator |
2026-02-20 02:03:44.000005 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-20 02:03:44.000029 | orchestrator | Friday 20 February 2026 02:03:36 +0000 (0:00:00.358) 0:05:35.625 *******
2026-02-20 02:03:44.000038 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:44.000046 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:44.000054 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:44.000062 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:44.000070 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:44.000078 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:44.000085 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:44.000093 | orchestrator |
2026-02-20 02:03:44.000101 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-20 02:03:44.000110 | orchestrator | Friday 20 February 2026 02:03:36 +0000 (0:00:00.316) 0:05:35.942 *******
2026-02-20 02:03:44.000118 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:44.000126 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:44.000134 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:44.000142 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:44.000149 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:44.000157 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:44.000164 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:44.000172 | orchestrator |
2026-02-20 02:03:44.000179 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-20 02:03:44.000187 | orchestrator | Friday 20 February 2026 02:03:37 +0000 (0:00:00.382) 0:05:36.324 *******
2026-02-20 02:03:44.000195 | orchestrator | ok: [testbed-manager] =>
2026-02-20 02:03:44.000203 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000210 | orchestrator | ok: [testbed-node-3] =>
2026-02-20 02:03:44.000218 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000225 | orchestrator | ok: [testbed-node-4] =>
2026-02-20 02:03:44.000233 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000241 | orchestrator | ok: [testbed-node-5] =>
2026-02-20 02:03:44.000248 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000281 | orchestrator | ok: [testbed-node-0] =>
2026-02-20 02:03:44.000291 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000299 | orchestrator | ok: [testbed-node-1] =>
2026-02-20 02:03:44.000306 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000314 | orchestrator | ok: [testbed-node-2] =>
2026-02-20 02:03:44.000321 | orchestrator |   docker_version: 5:27.5.1
2026-02-20 02:03:44.000329 | orchestrator |
2026-02-20 02:03:44.000336 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-20 02:03:44.000344 | orchestrator | Friday 20 February 2026 02:03:37 +0000 (0:00:00.285) 0:05:36.609 *******
2026-02-20 02:03:44.000351 | orchestrator | ok: [testbed-manager] =>
2026-02-20 02:03:44.000357 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000364 | orchestrator | ok: [testbed-node-3] =>
2026-02-20 02:03:44.000371 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000377 | orchestrator | ok: [testbed-node-4] =>
2026-02-20 02:03:44.000384 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000392 | orchestrator | ok: [testbed-node-5] =>
2026-02-20 02:03:44.000399 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000407 | orchestrator | ok: [testbed-node-0] =>
2026-02-20 02:03:44.000414 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000422 | orchestrator | ok: [testbed-node-1] =>
2026-02-20 02:03:44.000429 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000437 | orchestrator | ok: [testbed-node-2] =>
2026-02-20 02:03:44.000445 | orchestrator |   docker_cli_version: 5:27.5.1
2026-02-20 02:03:44.000453 | orchestrator |
2026-02-20 02:03:44.000461 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-20 02:03:44.000468 | orchestrator | Friday 20 February 2026 02:03:37 +0000 (0:00:00.371) 0:05:36.981 *******
2026-02-20 02:03:44.000476 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:44.000484 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:44.000491 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:44.000499 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:44.000506 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:44.000513 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:44.000521 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:44.000528 | orchestrator |
2026-02-20 02:03:44.000536 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-20 02:03:44.000543 | orchestrator | Friday 20 February 2026 02:03:37 +0000 (0:00:00.287) 0:05:37.268 *******
2026-02-20 02:03:44.000551 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:44.000583 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:44.000591 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:44.000599 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:44.000606 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:03:44.000614 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:03:44.000622 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:03:44.000629 | orchestrator |
2026-02-20 02:03:44.000637 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-20 02:03:44.000645 | orchestrator | Friday 20 February 2026 02:03:38 +0000 (0:00:00.332) 0:05:37.601 *******
2026-02-20 02:03:44.000655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:03:44.000665 | orchestrator |
2026-02-20 02:03:44.000673 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-20 02:03:44.000680 | orchestrator | Friday 20 February 2026 02:03:38 +0000 (0:00:00.511) 0:05:38.112 *******
2026-02-20 02:03:44.000688 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:44.000696 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:44.000703 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:44.000711 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:44.000719 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:44.000735 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:44.000742 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:44.000750 | orchestrator |
2026-02-20 02:03:44.000757 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-20 02:03:44.000765 | orchestrator | Friday 20 February 2026 02:03:39 +0000 (0:00:01.149) 0:05:39.261 *******
2026-02-20 02:03:44.000773 | orchestrator | ok: [testbed-manager]
2026-02-20 02:03:44.000780 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:03:44.000788 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:03:44.000795 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:03:44.000803 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:03:44.000816 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:03:44.000824 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:03:44.000831 | orchestrator |
2026-02-20 02:03:44.000839 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-20 02:03:44.000848 | orchestrator | Friday 20 February 2026 02:03:43 +0000 (0:00:03.550) 0:05:42.812 *******
2026-02-20 02:03:44.000856 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-20 02:03:44.000864 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-20 02:03:44.000872 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-20 02:03:44.000880 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:03:44.000888 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-20 02:03:44.000896 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-20 02:03:44.000903 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-20 02:03:44.000911 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:03:44.000918 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-20 02:03:44.000926 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-20 02:03:44.000933 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-20 02:03:44.000941 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:03:44.000948 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-20 02:03:44.000956 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-20 02:03:44.000963 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-20 02:03:44.000970 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:03:44.000983 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-20 02:04:48.670816 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-20 02:04:48.670897 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-20 02:04:48.670905 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:04:48.670913 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-20 02:04:48.670919 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-20 02:04:48.670925 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-20 02:04:48.670932 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:04:48.670938 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-20 02:04:48.670944 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-20 02:04:48.670950 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-20 02:04:48.670955 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:04:48.670962 | orchestrator |
2026-02-20 02:04:48.670969 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-20 02:04:48.670976 | orchestrator | Friday 20 February 2026 02:03:44 +0000 (0:00:00.697) 0:05:43.509 *******
2026-02-20 02:04:48.670982 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.670988 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.670994 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671000 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671006 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671012 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671037 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671043 | orchestrator |
2026-02-20 02:04:48.671049 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-20 02:04:48.671055 | orchestrator | Friday 20 February 2026 02:03:52 +0000 (0:00:07.809) 0:05:51.319 *******
2026-02-20 02:04:48.671061 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671067 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671073 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671079 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671085 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671090 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671096 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671102 | orchestrator |
2026-02-20 02:04:48.671108 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-20 02:04:48.671114 | orchestrator | Friday 20 February 2026 02:03:53 +0000 (0:00:01.141) 0:05:52.460 *******
2026-02-20 02:04:48.671120 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671126 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671131 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671137 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671143 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671149 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671155 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671161 | orchestrator |
2026-02-20 02:04:48.671166 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-20 02:04:48.671172 | orchestrator | Friday 20 February 2026 02:04:01 +0000 (0:00:08.787) 0:06:01.248 *******
2026-02-20 02:04:48.671178 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671184 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671190 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671196 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671202 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671211 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671219 | orchestrator | changed: [testbed-manager]
2026-02-20 02:04:48.671229 | orchestrator |
2026-02-20 02:04:48.671239 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-20 02:04:48.671248 | orchestrator | Friday 20 February 2026 02:04:04 +0000 (0:00:02.809) 0:06:04.058 *******
2026-02-20 02:04:48.671258 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671268 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671276 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671285 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671294 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671304 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671314 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671323 | orchestrator |
2026-02-20 02:04:48.671333 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-20 02:04:48.671343 | orchestrator | Friday 20 February 2026 02:04:06 +0000 (0:00:01.349) 0:06:05.407 *******
2026-02-20 02:04:48.671352 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671361 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671371 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671381 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671391 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671401 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671412 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671422 | orchestrator |
2026-02-20 02:04:48.671432 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-20 02:04:48.671444 | orchestrator | Friday 20 February 2026 02:04:07 +0000 (0:00:01.679) 0:06:07.087 *******
2026-02-20 02:04:48.671455 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:04:48.671464 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:04:48.671475 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:04:48.671485 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:04:48.671504 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:04:48.671511 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:04:48.671517 | orchestrator | changed: [testbed-manager]
2026-02-20 02:04:48.671524 | orchestrator |
2026-02-20 02:04:48.671531 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-20 02:04:48.671538 | orchestrator | Friday 20 February 2026 02:04:08 +0000 (0:00:00.629) 0:06:07.717 *******
2026-02-20 02:04:48.671545 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671552 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671558 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671565 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671596 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671603 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671610 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671616 | orchestrator |
2026-02-20 02:04:48.671623 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-20 02:04:48.671644 | orchestrator | Friday 20 February 2026 02:04:19 +0000 (0:00:10.624) 0:06:18.341 *******
2026-02-20 02:04:48.671651 | orchestrator | changed: [testbed-manager]
2026-02-20 02:04:48.671658 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671664 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671671 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671677 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671684 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671690 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671697 | orchestrator |
2026-02-20 02:04:48.671704 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-20 02:04:48.671711 | orchestrator | Friday 20 February 2026 02:04:20 +0000 (0:00:01.039) 0:06:19.381 *******
2026-02-20 02:04:48.671717 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671724 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671730 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671737 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671744 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671751 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671758 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671765 | orchestrator |
2026-02-20 02:04:48.671770 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-20 02:04:48.671776 | orchestrator | Friday 20 February 2026 02:04:29 +0000 (0:00:09.582) 0:06:28.963 *******
2026-02-20 02:04:48.671782 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.671788 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.671794 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.671799 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.671805 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.671811 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.671817 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.671822 | orchestrator |
2026-02-20 02:04:48.671828 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-20 02:04:48.671834 | orchestrator | Friday 20 February 2026 02:04:41 +0000 (0:00:11.598) 0:06:40.561 *******
2026-02-20 02:04:48.671840 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-20 02:04:48.671846 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-20 02:04:48.671852 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-20 02:04:48.671858 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-20 02:04:48.671864 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-20 02:04:48.671869 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-20 02:04:48.671875 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-20 02:04:48.671881 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-20 02:04:48.671887 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-20 02:04:48.671907 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-20 02:04:48.671912 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-20 02:04:48.671956 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-20 02:04:48.671964 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-20 02:04:48.671969 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-20 02:04:48.671975 | orchestrator |
2026-02-20 02:04:48.671981 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-20 02:04:48.671987 | orchestrator | Friday 20 February 2026 02:04:42 +0000 (0:00:01.368) 0:06:41.930 *******
2026-02-20 02:04:48.671993 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:04:48.671999 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:04:48.672004 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:04:48.672010 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:04:48.672016 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:04:48.672022 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:04:48.672027 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:04:48.672033 | orchestrator |
2026-02-20 02:04:48.672039 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-20 02:04:48.672045 | orchestrator | Friday 20 February 2026 02:04:43 +0000 (0:00:00.581) 0:06:42.511 *******
2026-02-20 02:04:48.672051 | orchestrator | ok: [testbed-manager]
2026-02-20 02:04:48.672057 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:04:48.672062 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:04:48.672068 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:04:48.672074 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:04:48.672080 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:04:48.672089 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:04:48.672095 | orchestrator |
2026-02-20 02:04:48.672101 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-20 02:04:48.672108 | orchestrator | Friday 20 February 2026 02:04:47 +0000 (0:00:04.290) 0:06:46.802 *******
2026-02-20 02:04:48.672113 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:04:48.672119 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:04:48.672125 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:04:48.672130 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:04:48.672136 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:04:48.672144 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:04:48.672154 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:04:48.672164 | orchestrator |
2026-02-20 02:04:48.672176 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-20 02:04:48.672187 | orchestrator | Friday 20 February 2026 02:04:48 +0000 (0:00:00.566) 0:06:47.369 *******
2026-02-20 02:04:48.672197 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-20 02:04:48.672208 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-20 02:04:48.672220 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:04:48.672231 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-20 02:04:48.672242 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-20 02:04:48.672248 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:04:48.672254 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-20 02:04:48.672260 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-20 02:04:48.672266 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:04:48.672278 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-20 02:05:10.259531 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-20 02:05:10.259627 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:05:10.259639 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-20 02:05:10.259646 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-20 02:05:10.259653 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:05:10.259685 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-20 02:05:10.259693 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-20 02:05:10.259699 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:05:10.259706 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-20 02:05:10.259713 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-20 02:05:10.259720 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:05:10.259724 | orchestrator |
2026-02-20 02:05:10.259730 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-02-20 02:05:10.259734 | orchestrator | Friday 20 February 2026 02:04:48 +0000 (0:00:00.876) 0:06:48.246 ******* 2026-02-20 02:05:10.259738 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:10.259742 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:10.259746 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:10.259750 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:10.259754 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:10.259757 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:10.259761 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:10.259765 | orchestrator | 2026-02-20 02:05:10.259769 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-20 02:05:10.259773 | orchestrator | Friday 20 February 2026 02:04:49 +0000 (0:00:00.579) 0:06:48.825 ******* 2026-02-20 02:05:10.259777 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:10.259780 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:10.259784 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:10.259788 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:10.259791 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:10.259795 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:10.259799 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:10.259802 | orchestrator | 2026-02-20 02:05:10.259806 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-20 02:05:10.259810 | orchestrator | Friday 20 February 2026 02:04:50 +0000 (0:00:00.555) 0:06:49.380 ******* 2026-02-20 02:05:10.259813 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:10.259817 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:10.259821 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:10.259825 | orchestrator | skipping: 
[testbed-node-5] 2026-02-20 02:05:10.259828 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:10.259832 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:10.259836 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:10.259839 | orchestrator | 2026-02-20 02:05:10.259843 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-20 02:05:10.259847 | orchestrator | Friday 20 February 2026 02:04:50 +0000 (0:00:00.639) 0:06:50.020 ******* 2026-02-20 02:05:10.259851 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.259855 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.259859 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:10.259862 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.259866 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.259870 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.259873 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.259877 | orchestrator | 2026-02-20 02:05:10.259881 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-20 02:05:10.259884 | orchestrator | Friday 20 February 2026 02:04:52 +0000 (0:00:02.159) 0:06:52.179 ******* 2026-02-20 02:05:10.259889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:05:10.259895 | orchestrator | 2026-02-20 02:05:10.259898 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-20 02:05:10.259902 | orchestrator | Friday 20 February 2026 02:04:53 +0000 (0:00:00.982) 0:06:53.161 ******* 2026-02-20 02:05:10.259915 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.259920 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:10.259923 | orchestrator | changed: 
[testbed-node-4] 2026-02-20 02:05:10.259927 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:10.259931 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:10.259935 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:10.259939 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:10.259942 | orchestrator | 2026-02-20 02:05:10.259946 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-20 02:05:10.259950 | orchestrator | Friday 20 February 2026 02:04:54 +0000 (0:00:00.928) 0:06:54.090 ******* 2026-02-20 02:05:10.259954 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.259957 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:10.259961 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:10.259965 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:10.260016 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:10.260020 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:10.260024 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:10.260027 | orchestrator | 2026-02-20 02:05:10.260031 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-20 02:05:10.260035 | orchestrator | Friday 20 February 2026 02:04:55 +0000 (0:00:00.936) 0:06:55.026 ******* 2026-02-20 02:05:10.260039 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260043 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:10.260047 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:10.260050 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:10.260054 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:10.260058 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:10.260061 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:10.260065 | orchestrator | 2026-02-20 02:05:10.260069 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-20 02:05:10.260086 | orchestrator | Friday 20 February 2026 02:04:57 +0000 (0:00:01.764) 0:06:56.791 ******* 2026-02-20 02:05:10.260090 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:10.260093 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.260097 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:10.260101 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.260106 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.260111 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.260115 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.260119 | orchestrator | 2026-02-20 02:05:10.260124 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-20 02:05:10.260128 | orchestrator | Friday 20 February 2026 02:04:59 +0000 (0:00:01.508) 0:06:58.299 ******* 2026-02-20 02:05:10.260132 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260137 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:10.260141 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:10.260146 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:10.260151 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:10.260155 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:10.260159 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:10.260163 | orchestrator | 2026-02-20 02:05:10.260168 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-20 02:05:10.260172 | orchestrator | Friday 20 February 2026 02:05:00 +0000 (0:00:01.469) 0:06:59.769 ******* 2026-02-20 02:05:10.260176 | orchestrator | changed: [testbed-manager] 2026-02-20 02:05:10.260181 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:10.260191 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:10.260196 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:10.260200 | orchestrator | changed: 
[testbed-node-0] 2026-02-20 02:05:10.260205 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:10.260209 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:10.260213 | orchestrator | 2026-02-20 02:05:10.260222 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-20 02:05:10.260227 | orchestrator | Friday 20 February 2026 02:05:01 +0000 (0:00:01.500) 0:07:01.269 ******* 2026-02-20 02:05:10.260232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:05:10.260236 | orchestrator | 2026-02-20 02:05:10.260241 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-20 02:05:10.260248 | orchestrator | Friday 20 February 2026 02:05:03 +0000 (0:00:01.213) 0:07:02.483 ******* 2026-02-20 02:05:10.260254 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260261 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.260267 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:10.260277 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.260284 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.260294 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.260301 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.260307 | orchestrator | 2026-02-20 02:05:10.260313 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-20 02:05:10.260320 | orchestrator | Friday 20 February 2026 02:05:04 +0000 (0:00:01.592) 0:07:04.075 ******* 2026-02-20 02:05:10.260326 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260333 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.260341 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.260347 | orchestrator | ok: [testbed-node-4] 
2026-02-20 02:05:10.260353 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.260360 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.260366 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.260373 | orchestrator | 2026-02-20 02:05:10.260379 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-20 02:05:10.260386 | orchestrator | Friday 20 February 2026 02:05:06 +0000 (0:00:01.231) 0:07:05.307 ******* 2026-02-20 02:05:10.260393 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260400 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.260407 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:10.260415 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.260421 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.260429 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.260436 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.260442 | orchestrator | 2026-02-20 02:05:10.260449 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-20 02:05:10.260456 | orchestrator | Friday 20 February 2026 02:05:07 +0000 (0:00:01.274) 0:07:06.581 ******* 2026-02-20 02:05:10.260463 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:10.260481 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:10.260485 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:10.260488 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:10.260492 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:10.260496 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:10.260499 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:10.260503 | orchestrator | 2026-02-20 02:05:10.260507 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-20 02:05:10.260510 | orchestrator | Friday 20 February 2026 02:05:08 +0000 (0:00:01.514) 0:07:08.095 ******* 2026-02-20 02:05:10.260514 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:05:10.260518 | orchestrator | 2026-02-20 02:05:10.260522 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:10.260552 | orchestrator | Friday 20 February 2026 02:05:09 +0000 (0:00:01.053) 0:07:09.149 ******* 2026-02-20 02:05:10.260556 | orchestrator | 2026-02-20 02:05:10.260560 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:10.260652 | orchestrator | Friday 20 February 2026 02:05:09 +0000 (0:00:00.043) 0:07:09.192 ******* 2026-02-20 02:05:10.260676 | orchestrator | 2026-02-20 02:05:10.260680 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:10.260684 | orchestrator | Friday 20 February 2026 02:05:09 +0000 (0:00:00.063) 0:07:09.255 ******* 2026-02-20 02:05:10.260688 | orchestrator | 2026-02-20 02:05:10.260692 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:10.260704 | orchestrator | Friday 20 February 2026 02:05:10 +0000 (0:00:00.062) 0:07:09.318 ******* 2026-02-20 02:05:40.068892 | orchestrator | 2026-02-20 02:05:40.068953 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:40.068962 | orchestrator | Friday 20 February 2026 02:05:10 +0000 (0:00:00.051) 0:07:09.370 ******* 2026-02-20 02:05:40.068968 | orchestrator | 2026-02-20 02:05:40.068974 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:40.068980 | orchestrator | Friday 20 February 2026 02:05:10 +0000 (0:00:00.057) 0:07:09.427 ******* 2026-02-20 02:05:40.068986 | orchestrator | 
2026-02-20 02:05:40.068992 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-20 02:05:40.068998 | orchestrator | Friday 20 February 2026 02:05:10 +0000 (0:00:00.047) 0:07:09.474 ******* 2026-02-20 02:05:40.069004 | orchestrator | 2026-02-20 02:05:40.069010 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-20 02:05:40.069016 | orchestrator | Friday 20 February 2026 02:05:10 +0000 (0:00:00.041) 0:07:09.516 ******* 2026-02-20 02:05:40.069021 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:40.069028 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:40.069034 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:40.069040 | orchestrator | 2026-02-20 02:05:40.069046 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-20 02:05:40.069052 | orchestrator | Friday 20 February 2026 02:05:11 +0000 (0:00:01.310) 0:07:10.827 ******* 2026-02-20 02:05:40.069058 | orchestrator | changed: [testbed-manager] 2026-02-20 02:05:40.069064 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:40.069070 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:40.069076 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:40.069082 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:40.069088 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:40.069093 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:40.069099 | orchestrator | 2026-02-20 02:05:40.069105 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-20 02:05:40.069111 | orchestrator | Friday 20 February 2026 02:05:13 +0000 (0:00:01.762) 0:07:12.589 ******* 2026-02-20 02:05:40.069117 | orchestrator | changed: [testbed-manager] 2026-02-20 02:05:40.069123 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:40.069129 | orchestrator | changed: [testbed-node-4] 
2026-02-20 02:05:40.069135 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:40.069140 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:40.069146 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:40.069152 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:40.069158 | orchestrator | 2026-02-20 02:05:40.069164 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-20 02:05:40.069170 | orchestrator | Friday 20 February 2026 02:05:14 +0000 (0:00:01.357) 0:07:13.947 ******* 2026-02-20 02:05:40.069176 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:40.069181 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:40.069187 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:40.069193 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:40.069199 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:40.069205 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:40.069211 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:40.069217 | orchestrator | 2026-02-20 02:05:40.069223 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-20 02:05:40.069229 | orchestrator | Friday 20 February 2026 02:05:16 +0000 (0:00:02.236) 0:07:16.183 ******* 2026-02-20 02:05:40.069249 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:40.069255 | orchestrator | 2026-02-20 02:05:40.069262 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-20 02:05:40.069268 | orchestrator | Friday 20 February 2026 02:05:17 +0000 (0:00:00.101) 0:07:16.285 ******* 2026-02-20 02:05:40.069274 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.069279 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:40.069285 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:40.069291 | orchestrator | changed: [testbed-node-5] 2026-02-20 
02:05:40.069297 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:40.069303 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:05:40.069308 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:40.069314 | orchestrator | 2026-02-20 02:05:40.069320 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-20 02:05:40.069327 | orchestrator | Friday 20 February 2026 02:05:18 +0000 (0:00:01.303) 0:07:17.589 ******* 2026-02-20 02:05:40.069332 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:40.069348 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:40.069353 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:40.069359 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:40.069365 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:40.069371 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:40.069376 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:40.069382 | orchestrator | 2026-02-20 02:05:40.069388 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-20 02:05:40.069394 | orchestrator | Friday 20 February 2026 02:05:19 +0000 (0:00:00.690) 0:07:18.280 ******* 2026-02-20 02:05:40.069400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:05:40.069407 | orchestrator | 2026-02-20 02:05:40.069413 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-20 02:05:40.069419 | orchestrator | Friday 20 February 2026 02:05:20 +0000 (0:00:01.284) 0:07:19.564 ******* 2026-02-20 02:05:40.069424 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.069430 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:40.069436 | orchestrator 
| ok: [testbed-node-4] 2026-02-20 02:05:40.069442 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:40.069448 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:40.069453 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:40.069459 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:40.069465 | orchestrator | 2026-02-20 02:05:40.069471 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-20 02:05:40.069478 | orchestrator | Friday 20 February 2026 02:05:21 +0000 (0:00:00.983) 0:07:20.547 ******* 2026-02-20 02:05:40.069484 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-20 02:05:40.069501 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-20 02:05:40.069508 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-20 02:05:40.069514 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-20 02:05:40.069520 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-20 02:05:40.069527 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-20 02:05:40.069533 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-20 02:05:40.069539 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-20 02:05:40.069546 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-20 02:05:40.069552 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-20 02:05:40.069558 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-20 02:05:40.069565 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-20 02:05:40.069591 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-20 02:05:40.069599 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-20 02:05:40.069606 | orchestrator | 2026-02-20 02:05:40.069612 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-20 02:05:40.069619 | orchestrator | Friday 20 February 2026 02:05:23 +0000 (0:00:02.664) 0:07:23.211 ******* 2026-02-20 02:05:40.069625 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:40.069632 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:40.069638 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:40.069644 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:40.069651 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:40.069657 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:40.069663 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:40.069670 | orchestrator | 2026-02-20 02:05:40.069676 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-20 02:05:40.069683 | orchestrator | Friday 20 February 2026 02:05:24 +0000 (0:00:00.814) 0:07:24.025 ******* 2026-02-20 02:05:40.069690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:05:40.069697 | orchestrator | 2026-02-20 02:05:40.069704 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-20 02:05:40.069710 | orchestrator | Friday 20 February 2026 02:05:25 +0000 (0:00:00.971) 0:07:24.997 ******* 2026-02-20 02:05:40.069717 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.069723 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:40.069730 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:40.069736 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:40.069743 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:40.069749 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:40.069756 | orchestrator | ok: 
[testbed-node-2] 2026-02-20 02:05:40.069762 | orchestrator | 2026-02-20 02:05:40.069768 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-20 02:05:40.069775 | orchestrator | Friday 20 February 2026 02:05:26 +0000 (0:00:00.978) 0:07:25.975 ******* 2026-02-20 02:05:40.069781 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.069787 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:40.069794 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:40.069800 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:40.069806 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:40.069813 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:40.069819 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:40.069825 | orchestrator | 2026-02-20 02:05:40.069832 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-20 02:05:40.069839 | orchestrator | Friday 20 February 2026 02:05:27 +0000 (0:00:01.188) 0:07:27.164 ******* 2026-02-20 02:05:40.069844 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:40.069850 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:40.069856 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:40.069862 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:40.069868 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:40.069874 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:40.069880 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:40.069886 | orchestrator | 2026-02-20 02:05:40.069892 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-20 02:05:40.069898 | orchestrator | Friday 20 February 2026 02:05:28 +0000 (0:00:00.555) 0:07:27.719 ******* 2026-02-20 02:05:40.069904 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.069910 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:05:40.069915 | 
orchestrator | ok: [testbed-node-5] 2026-02-20 02:05:40.069921 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:05:40.069927 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:05:40.069936 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:05:40.069942 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:05:40.069948 | orchestrator | 2026-02-20 02:05:40.069954 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-20 02:05:40.069960 | orchestrator | Friday 20 February 2026 02:05:30 +0000 (0:00:01.885) 0:07:29.605 ******* 2026-02-20 02:05:40.069966 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:05:40.069972 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:05:40.069978 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:05:40.069984 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:05:40.069990 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:05:40.069996 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:05:40.070002 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:05:40.070008 | orchestrator | 2026-02-20 02:05:40.070053 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-20 02:05:40.070059 | orchestrator | Friday 20 February 2026 02:05:30 +0000 (0:00:00.560) 0:07:30.166 ******* 2026-02-20 02:05:40.070066 | orchestrator | ok: [testbed-manager] 2026-02-20 02:05:40.070072 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:05:40.070079 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:05:40.070085 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:05:40.070092 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:05:40.070099 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:05:40.070110 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:14.144124 | orchestrator | 2026-02-20 02:06:14.144209 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-20 02:06:14.144228 | orchestrator | Friday 20 February 2026 02:05:40 +0000 (0:00:09.163) 0:07:39.329 ******* 2026-02-20 02:06:14.144242 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.144256 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:14.144272 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:14.144285 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:14.144299 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:14.144312 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:14.144327 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:14.144341 | orchestrator | 2026-02-20 02:06:14.144354 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-20 02:06:14.144367 | orchestrator | Friday 20 February 2026 02:05:41 +0000 (0:00:01.584) 0:07:40.913 ******* 2026-02-20 02:06:14.144380 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.144394 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:14.144408 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:14.144423 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:14.144436 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:14.144449 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:14.144462 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:14.144475 | orchestrator | 2026-02-20 02:06:14.144488 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-20 02:06:14.144502 | orchestrator | Friday 20 February 2026 02:05:43 +0000 (0:00:01.828) 0:07:42.742 ******* 2026-02-20 02:06:14.144516 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.144530 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:14.144543 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:14.144556 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:14.144588 | 
orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:14.144603 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:14.144639 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:14.144653 | orchestrator | 2026-02-20 02:06:14.144666 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-20 02:06:14.144679 | orchestrator | Friday 20 February 2026 02:05:45 +0000 (0:00:01.764) 0:07:44.506 ******* 2026-02-20 02:06:14.144693 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.144708 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.144721 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.144761 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.144777 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.144791 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.144805 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.144818 | orchestrator | 2026-02-20 02:06:14.144832 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-20 02:06:14.144845 | orchestrator | Friday 20 February 2026 02:05:46 +0000 (0:00:01.018) 0:07:45.524 ******* 2026-02-20 02:06:14.144858 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:06:14.144872 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:06:14.144886 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:06:14.144900 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:06:14.144913 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:06:14.144926 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:06:14.144939 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:06:14.144953 | orchestrator | 2026-02-20 02:06:14.144966 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-20 02:06:14.144981 | orchestrator | Friday 20 February 2026 02:05:47 +0000 (0:00:01.146) 0:07:46.670 ******* 
2026-02-20 02:06:14.144994 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:06:14.145007 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:06:14.145021 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:06:14.145034 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:06:14.145047 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:06:14.145061 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:06:14.145073 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:06:14.145087 | orchestrator | 2026-02-20 02:06:14.145100 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-20 02:06:14.145114 | orchestrator | Friday 20 February 2026 02:05:47 +0000 (0:00:00.565) 0:07:47.236 ******* 2026-02-20 02:06:14.145128 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.145157 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.145173 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.145186 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.145200 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.145213 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.145232 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.145245 | orchestrator | 2026-02-20 02:06:14.145259 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-20 02:06:14.145273 | orchestrator | Friday 20 February 2026 02:05:48 +0000 (0:00:00.553) 0:07:47.790 ******* 2026-02-20 02:06:14.145286 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.145299 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.145312 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.145326 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.145339 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.145352 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.145366 | orchestrator | ok: [testbed-node-2] 2026-02-20 
02:06:14.145379 | orchestrator | 2026-02-20 02:06:14.145393 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-20 02:06:14.145406 | orchestrator | Friday 20 February 2026 02:05:49 +0000 (0:00:00.619) 0:07:48.409 ******* 2026-02-20 02:06:14.145420 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.145434 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.145447 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.145460 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.145473 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.145486 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.145499 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.145512 | orchestrator | 2026-02-20 02:06:14.145526 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-20 02:06:14.145540 | orchestrator | Friday 20 February 2026 02:05:50 +0000 (0:00:00.885) 0:07:49.295 ******* 2026-02-20 02:06:14.145553 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.145566 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.145656 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.145672 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.145687 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.145703 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.145718 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.145732 | orchestrator | 2026-02-20 02:06:14.145763 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-20 02:06:14.145778 | orchestrator | Friday 20 February 2026 02:05:54 +0000 (0:00:04.417) 0:07:53.712 ******* 2026-02-20 02:06:14.145792 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:06:14.145807 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:06:14.145822 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:06:14.145837 
| orchestrator | skipping: [testbed-node-5] 2026-02-20 02:06:14.145851 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:06:14.145867 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:06:14.145882 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:06:14.145898 | orchestrator | 2026-02-20 02:06:14.145913 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-20 02:06:14.145927 | orchestrator | Friday 20 February 2026 02:05:55 +0000 (0:00:00.589) 0:07:54.302 ******* 2026-02-20 02:06:14.145944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:06:14.145960 | orchestrator | 2026-02-20 02:06:14.145975 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-20 02:06:14.145990 | orchestrator | Friday 20 February 2026 02:05:56 +0000 (0:00:01.307) 0:07:55.609 ******* 2026-02-20 02:06:14.146005 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.146072 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.146087 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.146101 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.146114 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.146126 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.146139 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.146152 | orchestrator | 2026-02-20 02:06:14.146166 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-20 02:06:14.146180 | orchestrator | Friday 20 February 2026 02:05:58 +0000 (0:00:02.552) 0:07:58.161 ******* 2026-02-20 02:06:14.146193 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.146206 | orchestrator | ok: [testbed-node-3] 2026-02-20 
02:06:14.146219 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.146232 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.146245 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.146258 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.146272 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.146286 | orchestrator | 2026-02-20 02:06:14.146299 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-20 02:06:14.146311 | orchestrator | Friday 20 February 2026 02:06:00 +0000 (0:00:01.329) 0:07:59.490 ******* 2026-02-20 02:06:14.146324 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:14.146337 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:14.146351 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:14.146365 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:14.146378 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:14.146391 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:14.146404 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:14.146417 | orchestrator | 2026-02-20 02:06:14.146430 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-20 02:06:14.146444 | orchestrator | Friday 20 February 2026 02:06:01 +0000 (0:00:00.937) 0:08:00.428 ******* 2026-02-20 02:06:14.146457 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146471 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146493 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146508 | orchestrator | changed: [testbed-node-0] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146527 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146541 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146555 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-20 02:06:14.146583 | orchestrator | 2026-02-20 02:06:14.146597 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-20 02:06:14.146610 | orchestrator | Friday 20 February 2026 02:06:03 +0000 (0:00:02.106) 0:08:02.534 ******* 2026-02-20 02:06:14.146624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:06:14.146638 | orchestrator | 2026-02-20 02:06:14.146653 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-20 02:06:14.146666 | orchestrator | Friday 20 February 2026 02:06:04 +0000 (0:00:00.951) 0:08:03.486 ******* 2026-02-20 02:06:14.146679 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:14.146692 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:14.146704 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:14.146718 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:14.146732 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:14.146745 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:14.146758 | orchestrator | changed: 
[testbed-node-2] 2026-02-20 02:06:14.146771 | orchestrator | 2026-02-20 02:06:14.146792 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-20 02:06:48.555674 | orchestrator | Friday 20 February 2026 02:06:14 +0000 (0:00:09.913) 0:08:13.399 ******* 2026-02-20 02:06:48.555772 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:48.555783 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:48.555790 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:48.555796 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:48.555803 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:48.555808 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:48.555815 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:48.555821 | orchestrator | 2026-02-20 02:06:48.555828 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-20 02:06:48.555835 | orchestrator | Friday 20 February 2026 02:06:16 +0000 (0:00:02.077) 0:08:15.478 ******* 2026-02-20 02:06:48.555841 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:48.555847 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:48.555853 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:48.555859 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:48.555866 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:48.555872 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:48.555878 | orchestrator | 2026-02-20 02:06:48.555884 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-20 02:06:48.555890 | orchestrator | Friday 20 February 2026 02:06:17 +0000 (0:00:01.473) 0:08:16.951 ******* 2026-02-20 02:06:48.555896 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.555903 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.555909 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.555915 | orchestrator | changed: 
[testbed-node-0] 2026-02-20 02:06:48.555922 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.555951 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.555958 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.555964 | orchestrator | 2026-02-20 02:06:48.555970 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-20 02:06:48.555976 | orchestrator | 2026-02-20 02:06:48.555982 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-20 02:06:48.555988 | orchestrator | Friday 20 February 2026 02:06:18 +0000 (0:00:01.282) 0:08:18.234 ******* 2026-02-20 02:06:48.555994 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:06:48.555999 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:06:48.556006 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:06:48.556012 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:06:48.556018 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:06:48.556024 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:06:48.556030 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:06:48.556036 | orchestrator | 2026-02-20 02:06:48.556042 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-20 02:06:48.556048 | orchestrator | 2026-02-20 02:06:48.556054 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-20 02:06:48.556060 | orchestrator | Friday 20 February 2026 02:06:19 +0000 (0:00:00.836) 0:08:19.070 ******* 2026-02-20 02:06:48.556066 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556072 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556079 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556085 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556091 | orchestrator | changed: [testbed-node-0] 2026-02-20 
02:06:48.556097 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556103 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556108 | orchestrator | 2026-02-20 02:06:48.556114 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-20 02:06:48.556120 | orchestrator | Friday 20 February 2026 02:06:21 +0000 (0:00:01.398) 0:08:20.469 ******* 2026-02-20 02:06:48.556126 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:48.556132 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:48.556138 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:48.556144 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:48.556149 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:48.556154 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:48.556161 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:48.556168 | orchestrator | 2026-02-20 02:06:48.556173 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-20 02:06:48.556178 | orchestrator | Friday 20 February 2026 02:06:22 +0000 (0:00:01.548) 0:08:22.017 ******* 2026-02-20 02:06:48.556184 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:06:48.556189 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:06:48.556195 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:06:48.556201 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:06:48.556207 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:06:48.556227 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:06:48.556234 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:06:48.556241 | orchestrator | 2026-02-20 02:06:48.556248 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-20 02:06:48.556255 | orchestrator | Friday 20 February 2026 02:06:23 +0000 (0:00:00.646) 0:08:22.664 ******* 2026-02-20 02:06:48.556263 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:06:48.556272 | orchestrator | 2026-02-20 02:06:48.556279 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-20 02:06:48.556285 | orchestrator | Friday 20 February 2026 02:06:24 +0000 (0:00:01.130) 0:08:23.794 ******* 2026-02-20 02:06:48.556293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:06:48.556321 | orchestrator | 2026-02-20 02:06:48.556328 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-20 02:06:48.556334 | orchestrator | Friday 20 February 2026 02:06:25 +0000 (0:00:00.883) 0:08:24.677 ******* 2026-02-20 02:06:48.556340 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556365 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556373 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556381 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556388 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556394 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556401 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556407 | orchestrator | 2026-02-20 02:06:48.556431 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-20 02:06:48.556438 | orchestrator | Friday 20 February 2026 02:06:35 +0000 (0:00:10.190) 0:08:34.868 ******* 2026-02-20 02:06:48.556445 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556450 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556455 | orchestrator | changed: [testbed-node-4] 2026-02-20 
02:06:48.556460 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556464 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556468 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556473 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556477 | orchestrator | 2026-02-20 02:06:48.556481 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-20 02:06:48.556486 | orchestrator | Friday 20 February 2026 02:06:36 +0000 (0:00:00.946) 0:08:35.815 ******* 2026-02-20 02:06:48.556490 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556494 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556497 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556501 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556505 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556508 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556512 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556516 | orchestrator | 2026-02-20 02:06:48.556519 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-20 02:06:48.556523 | orchestrator | Friday 20 February 2026 02:06:38 +0000 (0:00:01.527) 0:08:37.342 ******* 2026-02-20 02:06:48.556527 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556530 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556534 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556538 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556541 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556545 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556549 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556552 | orchestrator | 2026-02-20 02:06:48.556556 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-20 02:06:48.556584 | orchestrator | Friday 20 February 2026 02:06:40 +0000 (0:00:02.297) 0:08:39.639 ******* 2026-02-20 02:06:48.556589 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556592 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556596 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556599 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556603 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556607 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556611 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556614 | orchestrator | 2026-02-20 02:06:48.556618 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-20 02:06:48.556622 | orchestrator | Friday 20 February 2026 02:06:41 +0000 (0:00:01.425) 0:08:41.064 ******* 2026-02-20 02:06:48.556625 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556629 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556639 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556642 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556646 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556650 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556653 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556657 | orchestrator | 2026-02-20 02:06:48.556661 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-20 02:06:48.556664 | orchestrator | 2026-02-20 02:06:48.556668 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-20 02:06:48.556672 | orchestrator | Friday 20 February 2026 02:06:43 +0000 (0:00:01.258) 0:08:42.323 ******* 2026-02-20 02:06:48.556676 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-20 02:06:48.556680 | orchestrator | 2026-02-20 02:06:48.556683 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-20 02:06:48.556687 | orchestrator | Friday 20 February 2026 02:06:44 +0000 (0:00:01.081) 0:08:43.404 ******* 2026-02-20 02:06:48.556691 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:48.556695 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:48.556698 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:48.556702 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:48.556705 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:48.556709 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:48.556718 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:48.556721 | orchestrator | 2026-02-20 02:06:48.556725 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-20 02:06:48.556729 | orchestrator | Friday 20 February 2026 02:06:45 +0000 (0:00:01.190) 0:08:44.595 ******* 2026-02-20 02:06:48.556732 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:48.556736 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:48.556740 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:48.556744 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:48.556747 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:48.556751 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:48.556755 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:48.556758 | orchestrator | 2026-02-20 02:06:48.556762 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-20 02:06:48.556766 | orchestrator | Friday 20 February 2026 02:06:46 +0000 (0:00:01.199) 0:08:45.794 ******* 2026-02-20 02:06:48.556769 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-20 02:06:48.556773 | orchestrator | 2026-02-20 02:06:48.556777 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-20 02:06:48.556780 | orchestrator | Friday 20 February 2026 02:06:47 +0000 (0:00:01.093) 0:08:46.888 ******* 2026-02-20 02:06:48.556784 | orchestrator | ok: [testbed-manager] 2026-02-20 02:06:48.556788 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:06:48.556791 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:06:48.556795 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:06:48.556799 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:06:48.556802 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:06:48.556806 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:06:48.556810 | orchestrator | 2026-02-20 02:06:48.556816 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-20 02:06:50.405505 | orchestrator | Friday 20 February 2026 02:06:48 +0000 (0:00:00.926) 0:08:47.814 ******* 2026-02-20 02:06:50.405604 | orchestrator | changed: [testbed-manager] 2026-02-20 02:06:50.405615 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:06:50.405622 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:06:50.405629 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:06:50.405635 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:06:50.405642 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:06:50.405648 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:06:50.405671 | orchestrator | 2026-02-20 02:06:50.405676 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:06:50.405681 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-20 02:06:50.405689 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-20 02:06:50.405695 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-20 02:06:50.405702 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-20 02:06:50.405706 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-20 02:06:50.405710 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-20 02:06:50.405713 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-20 02:06:50.405717 | orchestrator | 2026-02-20 02:06:50.405721 | orchestrator | 2026-02-20 02:06:50.405725 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:06:50.405729 | orchestrator | Friday 20 February 2026 02:06:49 +0000 (0:00:01.246) 0:08:49.061 ******* 2026-02-20 02:06:50.405735 | orchestrator | =============================================================================== 2026-02-20 02:06:50.405740 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.80s 2026-02-20 02:06:50.405747 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.86s 2026-02-20 02:06:50.405753 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 30.38s 2026-02-20 02:06:50.405759 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.79s 2026-02-20 02:06:50.405765 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.25s 2026-02-20 02:06:50.405771 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.04s 2026-02-20 02:06:50.405777 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.60s 2026-02-20 02:06:50.405781 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.62s 2026-02-20 02:06:50.405785 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.19s 2026-02-20 02:06:50.405788 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.91s 2026-02-20 02:06:50.405792 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.60s 2026-02-20 02:06:50.405797 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.58s 2026-02-20 02:06:50.405802 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 9.16s 2026-02-20 02:06:50.405821 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.16s 2026-02-20 02:06:50.405825 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.92s 2026-02-20 02:06:50.405829 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.79s 2026-02-20 02:06:50.405832 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.81s 2026-02-20 02:06:50.405836 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.66s 2026-02-20 02:06:50.405840 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.07s 2026-02-20 02:06:50.405844 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.44s 2026-02-20 02:06:50.793522 | orchestrator | + osism apply fail2ban 2026-02-20 02:07:04.030908 | orchestrator | 2026-02-20 02:07:04 | INFO  | Task d1f48141-bbb9-4c4a-b23c-619c1514e7ae (fail2ban) was prepared for execution. 
2026-02-20 02:07:04.031039 | orchestrator | 2026-02-20 02:07:04 | INFO  | It takes a moment until task d1f48141-bbb9-4c4a-b23c-619c1514e7ae (fail2ban) has been started and output is visible here. 2026-02-20 02:07:28.521668 | orchestrator | 2026-02-20 02:07:28.521747 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-20 02:07:28.521754 | orchestrator | 2026-02-20 02:07:28.521759 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-20 02:07:28.521764 | orchestrator | Friday 20 February 2026 02:07:09 +0000 (0:00:00.307) 0:00:00.307 ******* 2026-02-20 02:07:28.521769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:07:28.521775 | orchestrator | 2026-02-20 02:07:28.521779 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-20 02:07:28.521783 | orchestrator | Friday 20 February 2026 02:07:11 +0000 (0:00:01.328) 0:00:01.635 ******* 2026-02-20 02:07:28.521787 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:07:28.521793 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:07:28.521796 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:07:28.521800 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:07:28.521804 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:07:28.521807 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:07:28.521811 | orchestrator | changed: [testbed-manager] 2026-02-20 02:07:28.521815 | orchestrator | 2026-02-20 02:07:28.521819 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-20 02:07:28.521823 | orchestrator | Friday 20 February 2026 02:07:22 +0000 (0:00:11.778) 0:00:13.414 ******* 
2026-02-20 02:07:28.521827 | orchestrator | changed: [testbed-manager]
2026-02-20 02:07:28.521830 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:07:28.521834 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:07:28.521838 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:07:28.521841 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:07:28.521845 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:07:28.521849 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:07:28.521852 | orchestrator |
2026-02-20 02:07:28.521856 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-20 02:07:28.521860 | orchestrator | Friday 20 February 2026 02:07:24 +0000 (0:00:01.698) 0:00:15.112 *******
2026-02-20 02:07:28.521864 | orchestrator | ok: [testbed-manager]
2026-02-20 02:07:28.521868 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:07:28.521872 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:07:28.521876 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:07:28.521880 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:07:28.521884 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:07:28.521887 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:07:28.521891 | orchestrator |
2026-02-20 02:07:28.521895 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-20 02:07:28.521898 | orchestrator | Friday 20 February 2026 02:07:26 +0000 (0:00:01.583) 0:00:16.696 *******
2026-02-20 02:07:28.521902 | orchestrator | changed: [testbed-manager]
2026-02-20 02:07:28.521906 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:07:28.521910 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:07:28.521913 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:07:28.521917 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:07:28.521921 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:07:28.521924 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:07:28.521928 | orchestrator |
2026-02-20 02:07:28.521932 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:07:28.521936 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521963 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521969 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521976 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521980 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521984 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521989 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:07:28.521992 | orchestrator |
2026-02-20 02:07:28.521996 | orchestrator |
2026-02-20 02:07:28.522000 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:07:28.522004 | orchestrator | Friday 20 February 2026 02:07:27 +0000 (0:00:01.884) 0:00:18.581 *******
2026-02-20 02:07:28.522008 | orchestrator | ===============================================================================
2026-02-20 02:07:28.522051 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.78s
2026-02-20 02:07:28.522056 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.88s
2026-02-20 02:07:28.522060 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.70s
2026-02-20 02:07:28.522064 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.58s
2026-02-20 02:07:28.522068 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.33s
2026-02-20 02:07:28.922681 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-20 02:07:28.922785 | orchestrator | + osism apply network
2026-02-20 02:07:41.518246 | orchestrator | 2026-02-20 02:07:41 | INFO  | Task a993b4bf-40dc-451a-84a8-6ef45d9dbb29 (network) was prepared for execution.
2026-02-20 02:07:41.518339 | orchestrator | 2026-02-20 02:07:41 | INFO  | It takes a moment until task a993b4bf-40dc-451a-84a8-6ef45d9dbb29 (network) has been started and output is visible here.
2026-02-20 02:08:15.023263 | orchestrator |
2026-02-20 02:08:15.023340 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-20 02:08:15.023347 | orchestrator |
2026-02-20 02:08:15.023352 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-20 02:08:15.023356 | orchestrator | Friday 20 February 2026 02:07:46 +0000 (0:00:00.280) 0:00:00.280 *******
2026-02-20 02:08:15.023361 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023366 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023370 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023374 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023378 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023381 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023385 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023389 | orchestrator |
2026-02-20 02:08:15.023393 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-20 02:08:15.023397 | orchestrator | Friday 20 February 2026 02:07:47 +0000 (0:00:00.833) 0:00:01.114 *******
2026-02-20 02:08:15.023402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:08:15.023408 | orchestrator |
2026-02-20 02:08:15.023412 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-20 02:08:15.023433 | orchestrator | Friday 20 February 2026 02:07:48 +0000 (0:00:01.394) 0:00:02.509 *******
2026-02-20 02:08:15.023440 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023445 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023449 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023453 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023457 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023460 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023464 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023468 | orchestrator |
2026-02-20 02:08:15.023472 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-20 02:08:15.023475 | orchestrator | Friday 20 February 2026 02:07:51 +0000 (0:00:02.432) 0:00:04.941 *******
2026-02-20 02:08:15.023479 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023483 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023487 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023491 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023495 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023498 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023502 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023506 | orchestrator |
2026-02-20 02:08:15.023510 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-20 02:08:15.023513 | orchestrator | Friday 20 February 2026 02:07:53 +0000 (0:00:02.279) 0:00:07.221 *******
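[Editorial note: the "Copy netplan configuration" step below deploys a rendered netplan file (the cleanup step later in this log manages /etc/netplan/01-osism.yaml). The actual rendered contents are not shown in the job output; the following is only an illustrative sketch of what such a netplan file generally looks like, with the interface name and address being assumptions rather than values from this job.]

```yaml
# Illustrative sketch only -- not the file rendered by this job.
# Interface name (ens3) and address are hypothetical examples.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
```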
2026-02-20 02:08:15.023539 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-20 02:08:15.023544 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-20 02:08:15.023548 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-20 02:08:15.023552 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-20 02:08:15.023556 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-20 02:08:15.023560 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-20 02:08:15.023564 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-20 02:08:15.023568 | orchestrator |
2026-02-20 02:08:15.023584 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-20 02:08:15.023588 | orchestrator | Friday 20 February 2026 02:07:54 +0000 (0:00:01.075) 0:00:08.297 *******
2026-02-20 02:08:15.023592 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 02:08:15.023597 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 02:08:15.023601 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 02:08:15.023604 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 02:08:15.023608 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 02:08:15.023612 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 02:08:15.023615 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 02:08:15.023619 | orchestrator |
2026-02-20 02:08:15.023623 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-20 02:08:15.023627 | orchestrator | Friday 20 February 2026 02:07:58 +0000 (0:00:04.199) 0:00:12.496 *******
2026-02-20 02:08:15.023631 | orchestrator | changed: [testbed-manager]
2026-02-20 02:08:15.023635 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:08:15.023639 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:08:15.023642 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:08:15.023649 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:08:15.023653 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:08:15.023657 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:08:15.023660 | orchestrator |
2026-02-20 02:08:15.023664 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-20 02:08:15.023668 | orchestrator | Friday 20 February 2026 02:08:00 +0000 (0:00:01.767) 0:00:14.264 *******
2026-02-20 02:08:15.023672 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 02:08:15.023675 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 02:08:15.023679 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 02:08:15.023683 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 02:08:15.023691 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 02:08:15.023695 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 02:08:15.023699 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 02:08:15.023703 | orchestrator |
2026-02-20 02:08:15.023706 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-20 02:08:15.023710 | orchestrator | Friday 20 February 2026 02:08:02 +0000 (0:00:01.926) 0:00:16.190 *******
2026-02-20 02:08:15.023714 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023717 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023721 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023725 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023729 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023733 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023736 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023740 | orchestrator |
2026-02-20 02:08:15.023744 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-20 02:08:15.023758 | orchestrator | Friday 20 February 2026 02:08:03 +0000 (0:00:01.240) 0:00:17.430 *******
2026-02-20 02:08:15.023762 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:08:15.023766 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:08:15.023770 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:08:15.023774 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:08:15.023777 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:08:15.023781 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:15.023785 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:15.023788 | orchestrator |
2026-02-20 02:08:15.023792 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-20 02:08:15.023796 | orchestrator | Friday 20 February 2026 02:08:04 +0000 (0:00:00.741) 0:00:18.172 *******
2026-02-20 02:08:15.023800 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023804 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023807 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023811 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023815 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023819 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023823 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023828 | orchestrator |
2026-02-20 02:08:15.023833 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-20 02:08:15.023838 | orchestrator | Friday 20 February 2026 02:08:06 +0000 (0:00:02.571) 0:00:20.744 *******
2026-02-20 02:08:15.023842 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:08:15.023847 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:08:15.023851 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:08:15.023856 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:08:15.023860 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:15.023864 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:15.023870 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-20 02:08:15.023876 | orchestrator |
2026-02-20 02:08:15.023880 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-20 02:08:15.023885 | orchestrator | Friday 20 February 2026 02:08:07 +0000 (0:00:00.990) 0:00:21.734 *******
2026-02-20 02:08:15.023889 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023894 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:08:15.023898 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:08:15.023902 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:08:15.023907 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:08:15.023912 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:08:15.023916 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:08:15.023920 | orchestrator |
2026-02-20 02:08:15.023925 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-20 02:08:15.023929 | orchestrator | Friday 20 February 2026 02:08:09 +0000 (0:00:01.971) 0:00:23.706 *******
2026-02-20 02:08:15.023934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:08:15.023944 | orchestrator |
2026-02-20 02:08:15.023948 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-20 02:08:15.023953 | orchestrator | Friday 20 February 2026 02:08:11 +0000 (0:00:01.076) 0:00:25.190 *******
2026-02-20 02:08:15.023957 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.023961 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.023966 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.023970 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.023975 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.023979 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.023983 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.023987 | orchestrator |
2026-02-20 02:08:15.023992 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-20 02:08:15.023996 | orchestrator | Friday 20 February 2026 02:08:12 +0000 (0:00:01.076) 0:00:26.267 *******
2026-02-20 02:08:15.024003 | orchestrator | ok: [testbed-manager]
2026-02-20 02:08:15.024008 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:08:15.024015 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:08:15.024021 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:08:15.024026 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:08:15.024032 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:08:15.024038 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:08:15.024044 | orchestrator |
2026-02-20 02:08:15.024050 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-20 02:08:15.024057 | orchestrator | Friday 20 February 2026 02:08:13 +0000 (0:00:01.070) 0:00:27.337 *******
2026-02-20 02:08:15.024068 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024073 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024077 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024081 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024084 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024088 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024092 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024095 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024099 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024103 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024106 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024110 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-20 02:08:15.024116 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024122 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-20 02:08:15.024127 | orchestrator |
2026-02-20 02:08:15.024136 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-20 02:08:34.127500 | orchestrator | Friday 20 February 2026 02:08:15 +0000 (0:00:01.475) 0:00:28.812 *******
2026-02-20 02:08:34.127635 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:08:34.127646 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:08:34.127653 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:08:34.127659 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:08:34.127665 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:08:34.127671 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:34.127677 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:34.127683 | orchestrator |
2026-02-20 02:08:34.127707 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-20 02:08:34.127714 | orchestrator | Friday 20 February 2026 02:08:15 +0000 (0:00:00.708) 0:00:29.521 *******
2026-02-20 02:08:34.127722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:08:34.127730 | orchestrator |
2026-02-20 02:08:34.127736 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-20 02:08:34.127742 | orchestrator | Friday 20 February 2026 02:08:20 +0000 (0:00:05.143) 0:00:34.665 *******
2026-02-20 02:08:34.127749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127778 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127798 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127869 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127928 | orchestrator |
2026-02-20 02:08:34.127934 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-20 02:08:34.127940 | orchestrator | Friday 20 February 2026 02:08:27 +0000 (0:00:06.579) 0:00:41.244 *******
2026-02-20 02:08:34.127946 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127969 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.127981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.127987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.128008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-20 02:08:34.128014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.128020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.128032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:34.128052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:42.488838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-20 02:08:42.488949 | orchestrator |
2026-02-20 02:08:42.488968 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-20 02:08:42.488981 | orchestrator | Friday 20 February 2026 02:08:34 +0000 (0:00:06.668) 0:00:47.912 *******
2026-02-20 02:08:42.488995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:08:42.489006 | orchestrator |
2026-02-20 02:08:42.489018 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-20 02:08:42.489031 | orchestrator | Friday 20 February 2026 02:08:35 +0000 (0:00:01.410) 0:00:49.322 ******* 2026-02-20 02:08:42.489044 | orchestrator | ok: [testbed-manager] 2026-02-20 02:08:42.489057 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:08:42.489069 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:08:42.489081 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:08:42.489093 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:08:42.489105 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:08:42.489118 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:08:42.489132 | orchestrator | 2026-02-20 02:08:42.489145 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-20 02:08:42.489158 | orchestrator | Friday 20 February 2026 02:08:37 +0000 (0:00:02.352) 0:00:51.675 ******* 2026-02-20 02:08:42.489170 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489183 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489197 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-20 02:08:42.489210 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-20 02:08:42.489223 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:08:42.489237 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489250 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489263 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-20 02:08:42.489276 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-20 02:08:42.489289 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:08:42.489303 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489316 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489329 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-20 02:08:42.489342 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-20 02:08:42.489383 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:08:42.489392 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489400 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489408 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-20 02:08:42.489415 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-20 02:08:42.489423 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:08:42.489445 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489453 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489461 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-20 02:08:42.489468 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-20 02:08:42.489476 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:08:42.489484 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-20 02:08:42.489491 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-20 02:08:42.489499 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2026-02-20 02:08:42.489535 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-20 02:08:42.489543 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:42.489551 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-20 02:08:42.489559 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-20 02:08:42.489567 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-20 02:08:42.489575 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-20 02:08:42.489582 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:42.489590 | orchestrator |
2026-02-20 02:08:42.489596 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-20 02:08:42.489620 | orchestrator | Friday 20 February 2026 02:08:40 +0000 (0:00:02.565) 0:00:54.241 *******
2026-02-20 02:08:42.489627 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:08:42.489634 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:08:42.489641 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:08:42.489648 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:08:42.489654 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:08:42.489661 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:42.489668 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:42.489674 | orchestrator |
2026-02-20 02:08:42.489681 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-20 02:08:42.489688 | orchestrator | Friday 20 February 2026 02:08:41 +0000 (0:00:00.741) 0:00:54.982 *******
2026-02-20 02:08:42.489694 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:08:42.489701 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:08:42.489707 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:08:42.489714 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:08:42.489721 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:08:42.489727 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:08:42.489734 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:08:42.489740 | orchestrator |
2026-02-20 02:08:42.489747 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:08:42.489755 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 02:08:42.489763 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489777 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489784 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489791 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489797 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489804 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 02:08:42.489811 | orchestrator |
2026-02-20 02:08:42.489817 | orchestrator |
2026-02-20 02:08:42.489824 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:08:42.489831 | orchestrator | Friday 20 February 2026 02:08:42 +0000 (0:00:00.834) 0:00:55.816 *******
2026-02-20 02:08:42.489837 | orchestrator | ===============================================================================
2026-02-20 02:08:42.489844 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.67s
2026-02-20 02:08:42.489851 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.58s
2026-02-20 02:08:42.489857 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.14s
2026-02-20 02:08:42.489864 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 4.20s
2026-02-20 02:08:42.489871 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.57s
2026-02-20 02:08:42.489877 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.57s
2026-02-20 02:08:42.489884 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.43s
2026-02-20 02:08:42.489895 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.35s
2026-02-20 02:08:42.489901 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.28s
2026-02-20 02:08:42.489908 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.97s
2026-02-20 02:08:42.489915 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.93s
2026-02-20 02:08:42.489921 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.77s
2026-02-20 02:08:42.489928 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.48s
2026-02-20 02:08:42.489934 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.48s
2026-02-20 02:08:42.489941 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.41s
2026-02-20 02:08:42.489947 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.39s
2026-02-20 02:08:42.489954 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.24s
2026-02-20 02:08:42.489961 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s
2026-02-20 02:08:42.489967 | orchestrator | osism.commons.network : Create required directories --------------------- 1.08s
2026-02-20 02:08:42.489974 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 1.07s
2026-02-20 02:08:42.841175 | orchestrator | + osism apply wireguard
2026-02-20 02:08:55.122103 | orchestrator | 2026-02-20 02:08:55 | INFO  | Task e964f81a-3787-4383-9eb1-682a928f87fe (wireguard) was prepared for execution.
2026-02-20 02:08:55.122168 | orchestrator | 2026-02-20 02:08:55 | INFO  | It takes a moment until task e964f81a-3787-4383-9eb1-682a928f87fe (wireguard) has been started and output is visible here.
2026-02-20 02:09:17.467628 | orchestrator |
2026-02-20 02:09:17.467766 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-20 02:09:17.467782 | orchestrator |
2026-02-20 02:09:17.467794 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-20 02:09:17.467806 | orchestrator | Friday 20 February 2026 02:08:59 +0000 (0:00:00.289) 0:00:00.289 *******
2026-02-20 02:09:17.467817 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:17.467829 | orchestrator |
2026-02-20 02:09:17.467839 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-20 02:09:17.467850 | orchestrator | Friday 20 February 2026 02:09:01 +0000 (0:00:01.744) 0:00:02.033 *******
2026-02-20 02:09:17.467862 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.467879 | orchestrator |
2026-02-20 02:09:17.467891 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-20 02:09:17.467902 | orchestrator | Friday 20 February 2026 02:09:09 +0000 (0:00:07.397) 0:00:09.431 *******
2026-02-20 02:09:17.467913 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.467924 | orchestrator |
2026-02-20 02:09:17.467934 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-20 02:09:17.467945 | orchestrator | Friday 20 February 2026 02:09:09 +0000 (0:00:00.657) 0:00:10.088 *******
2026-02-20 02:09:17.467956 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.467967 | orchestrator |
2026-02-20 02:09:17.467977 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-20 02:09:17.467988 | orchestrator | Friday 20 February 2026 02:09:10 +0000 (0:00:00.471) 0:00:10.560 *******
2026-02-20 02:09:17.467999 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:17.468010 | orchestrator |
2026-02-20 02:09:17.468020 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-20 02:09:17.468031 | orchestrator | Friday 20 February 2026 02:09:11 +0000 (0:00:00.749) 0:00:11.309 *******
2026-02-20 02:09:17.468042 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:17.468053 | orchestrator |
2026-02-20 02:09:17.468063 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-20 02:09:17.468074 | orchestrator | Friday 20 February 2026 02:09:11 +0000 (0:00:00.454) 0:00:11.763 *******
2026-02-20 02:09:17.468085 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:17.468096 | orchestrator |
2026-02-20 02:09:17.468107 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-20 02:09:17.468117 | orchestrator | Friday 20 February 2026 02:09:11 +0000 (0:00:00.452) 0:00:12.216 *******
2026-02-20 02:09:17.468128 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.468139 | orchestrator |
2026-02-20 02:09:17.468153 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-20 02:09:17.468166 | orchestrator | Friday 20 February 2026 02:09:13 +0000 (0:00:01.261) 0:00:13.477 *******
2026-02-20 02:09:17.468178 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-20 02:09:17.468191 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.468203 | orchestrator |
2026-02-20 02:09:17.468218 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-20 02:09:17.468237 | orchestrator | Friday 20 February 2026 02:09:14 +0000 (0:00:00.987) 0:00:14.465 *******
2026-02-20 02:09:17.468255 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.468275 | orchestrator |
2026-02-20 02:09:17.468293 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-20 02:09:17.468312 | orchestrator | Friday 20 February 2026 02:09:16 +0000 (0:00:01.892) 0:00:16.358 *******
2026-02-20 02:09:17.468330 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:17.468348 | orchestrator |
2026-02-20 02:09:17.468366 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:09:17.468384 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:09:17.468403 | orchestrator |
2026-02-20 02:09:17.468421 | orchestrator |
2026-02-20 02:09:17.468440 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:09:17.468474 | orchestrator | Friday 20 February 2026 02:09:17 +0000 (0:00:00.979) 0:00:17.337 *******
2026-02-20 02:09:17.468650 | orchestrator | ===============================================================================
2026-02-20 02:09:17.468673 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.40s
2026-02-20 02:09:17.468691 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.89s
2026-02-20 02:09:17.468708 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.74s
2026-02-20 02:09:17.468723 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s
2026-02-20 02:09:17.468741 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2026-02-20 02:09:17.468758 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2026-02-20 02:09:17.468774 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.75s
2026-02-20 02:09:17.468790 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.66s
2026-02-20 02:09:17.468806 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2026-02-20 02:09:17.468822 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.45s
2026-02-20 02:09:17.468837 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-02-20 02:09:17.827085 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-20 02:09:17.869190 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-20 02:09:17.869305 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-20 02:09:17.949981 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 185 0 --:--:-- --:--:-- --:--:-- 187
2026-02-20 02:09:17.965074 | orchestrator | + osism apply --environment custom workarounds
2026-02-20 02:09:20.108732 | orchestrator | 2026-02-20 02:09:20 | INFO  | Trying to run play workarounds in environment custom
2026-02-20 02:09:30.248833 | orchestrator | 2026-02-20 02:09:30 | INFO  | Task 46a953b3-e651-4508-97c0-9d90f9c0e8d9 (workarounds) was prepared for execution.
2026-02-20 02:09:30.248934 | orchestrator | 2026-02-20 02:09:30 | INFO  | It takes a moment until task 46a953b3-e651-4508-97c0-9d90f9c0e8d9 (workarounds) has been started and output is visible here.
2026-02-20 02:09:56.738768 | orchestrator |
2026-02-20 02:09:56.738856 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:09:56.738867 | orchestrator |
2026-02-20 02:09:56.738874 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-20 02:09:56.738881 | orchestrator | Friday 20 February 2026 02:09:34 +0000 (0:00:00.128) 0:00:00.129 *******
2026-02-20 02:09:56.738889 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738896 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738903 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738909 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738916 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738923 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738930 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-20 02:09:56.738937 | orchestrator |
2026-02-20 02:09:56.738943 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-20 02:09:56.738950 | orchestrator |
2026-02-20 02:09:56.738957 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-20 02:09:56.738965 | orchestrator | Friday 20 February 2026 02:09:35 +0000 (0:00:00.978) 0:00:01.107 *******
2026-02-20 02:09:56.738972 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:56.739004 | orchestrator |
2026-02-20 02:09:56.739010 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-20 02:09:56.739014 | orchestrator |
2026-02-20 02:09:56.739018 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-20 02:09:56.739022 | orchestrator | Friday 20 February 2026 02:09:38 +0000 (0:00:02.518) 0:00:03.626 *******
2026-02-20 02:09:56.739026 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:09:56.739030 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:09:56.739034 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:09:56.739037 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:09:56.739041 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:09:56.739045 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:09:56.739049 | orchestrator |
2026-02-20 02:09:56.739052 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-20 02:09:56.739056 | orchestrator |
2026-02-20 02:09:56.739060 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-20 02:09:56.739064 | orchestrator | Friday 20 February 2026 02:09:40 +0000 (0:00:02.031) 0:00:05.657 *******
2026-02-20 02:09:56.739068 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739073 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739077 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739080 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739084 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739099 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-20 02:09:56.739104 | orchestrator |
2026-02-20 02:09:56.739107 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-20 02:09:56.739111 | orchestrator | Friday 20 February 2026 02:09:41 +0000 (0:00:01.581) 0:00:07.239 *******
2026-02-20 02:09:56.739115 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:09:56.739119 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:09:56.739123 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:09:56.739126 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:09:56.739130 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:09:56.739134 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:09:56.739137 | orchestrator |
2026-02-20 02:09:56.739141 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-20 02:09:56.739145 | orchestrator | Friday 20 February 2026 02:09:44 +0000 (0:00:02.838) 0:00:10.078 *******
2026-02-20 02:09:56.739149 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:09:56.739153 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:09:56.739157 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:09:56.739160 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:09:56.739164 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:09:56.739168 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:09:56.739171 | orchestrator |
2026-02-20 02:09:56.739175 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-20 02:09:56.739179 | orchestrator |
2026-02-20 02:09:56.739183 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-20 02:09:56.739186 | orchestrator | Friday 20 February 2026 02:09:45 +0000 (0:00:00.822) 0:00:10.901 *******
2026-02-20 02:09:56.739190 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:09:56.739194 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:09:56.739197 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:09:56.739201 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:09:56.739205 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:09:56.739208 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:09:56.739228 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:56.739232 | orchestrator |
2026-02-20 02:09:56.739242 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-20 02:09:56.739245 | orchestrator | Friday 20 February 2026 02:09:47 +0000 (0:00:01.700) 0:00:12.601 *******
2026-02-20 02:09:56.739249 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:09:56.739253 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:09:56.739257 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:09:56.739260 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:09:56.739266 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:09:56.739272 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:09:56.739294 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:56.739300 | orchestrator |
2026-02-20 02:09:56.739307 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-20 02:09:56.739314 | orchestrator | Friday 20 February 2026 02:09:49 +0000 (0:00:01.746) 0:00:14.289 *******
2026-02-20 02:09:56.739322 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:09:56.739329 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:09:56.739335 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:09:56.739338 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:09:56.739343 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:09:56.739347 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:09:56.739351 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:56.739355 | orchestrator |
2026-02-20 02:09:56.739360 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-20 02:09:56.739364 | orchestrator | Friday 20 February 2026 02:09:50 +0000 (0:00:01.746) 0:00:16.036 *******
2026-02-20 02:09:56.739368 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:09:56.739373 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:09:56.739377 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:09:56.739381 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:09:56.739385 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:09:56.739389 | orchestrator | changed: [testbed-manager]
2026-02-20 02:09:56.739394 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:09:56.739398 | orchestrator |
2026-02-20 02:09:56.739403 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-20 02:09:56.739408 | orchestrator | Friday 20 February 2026 02:09:52 +0000 (0:00:02.005) 0:00:18.041 *******
2026-02-20 02:09:56.739414 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:09:56.739419 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:09:56.739429 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:09:56.739437 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:09:56.739442 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:09:56.739449 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:09:56.739454 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:09:56.739460 | orchestrator |
2026-02-20 02:09:56.739483 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-20 02:09:56.739490 | orchestrator |
2026-02-20 02:09:56.739496 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-20 02:09:56.739503 | orchestrator | Friday 20 February 2026 02:09:53 +0000 (0:00:00.711) 0:00:18.753 *******
2026-02-20 02:09:56.739506 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:09:56.739510 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:09:56.739514 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:09:56.739517 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:09:56.739521 | orchestrator | ok: [testbed-manager]
2026-02-20 02:09:56.739525 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:09:56.739528 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:09:56.739534 | orchestrator |
2026-02-20 02:09:56.739540 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:09:56.739546 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:09:56.739557 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739573 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739585 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739591 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739597 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739603 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:09:56.739608 | orchestrator |
2026-02-20 02:09:56.739613 | orchestrator |
2026-02-20 02:09:56.739619 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:09:56.739625 | orchestrator | Friday 20 February 2026 02:09:56 +0000 (0:00:03.225) 0:00:21.978 *******
2026-02-20 02:09:56.739631 | orchestrator | ===============================================================================
2026-02-20 02:09:56.739637 | orchestrator | Install python3-docker -------------------------------------------------- 3.23s
2026-02-20 02:09:56.739643 | orchestrator | Run update-ca-certificates ---------------------------------------------- 2.84s
2026-02-20 02:09:56.739649 | orchestrator | Apply netplan configuration --------------------------------------------- 2.52s
2026-02-20 02:09:56.739654 | orchestrator | Apply netplan configuration --------------------------------------------- 2.03s
2026-02-20 02:09:56.739660 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.01s
2026-02-20 02:09:56.739666 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.75s
2026-02-20 02:09:56.739673 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2026-02-20 02:09:56.739679 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s
2026-02-20 02:09:56.739685 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.58s
2026-02-20 02:09:56.739691 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.98s
2026-02-20 02:09:56.739697 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.82s
2026-02-20 02:09:56.739712 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s
2026-02-20 02:09:57.684081 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-20 02:10:10.009628 | orchestrator | 2026-02-20 02:10:10 | INFO  | Task 6cc947a2-a317-4616-9967-02ef6e622d69 (reboot) was prepared for execution.
2026-02-20 02:10:10.009740 | orchestrator | 2026-02-20 02:10:10 | INFO  | It takes a moment until task 6cc947a2-a317-4616-9967-02ef6e622d69 (reboot) has been started and output is visible here.
2026-02-20 02:10:21.163189 | orchestrator |
2026-02-20 02:10:21.163240 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163250 | orchestrator |
2026-02-20 02:10:21.163257 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163264 | orchestrator | Friday 20 February 2026 02:10:14 +0000 (0:00:00.216) 0:00:00.216 *******
2026-02-20 02:10:21.163270 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:10:21.163278 | orchestrator |
2026-02-20 02:10:21.163284 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163291 | orchestrator | Friday 20 February 2026 02:10:14 +0000 (0:00:00.100) 0:00:00.316 *******
2026-02-20 02:10:21.163297 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:10:21.163304 | orchestrator |
2026-02-20 02:10:21.163310 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163330 | orchestrator | Friday 20 February 2026 02:10:15 +0000 (0:00:01.050) 0:00:01.366 *******
2026-02-20 02:10:21.163337 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:10:21.163343 | orchestrator |
2026-02-20 02:10:21.163349 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163355 | orchestrator |
2026-02-20 02:10:21.163359 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163363 | orchestrator | Friday 20 February 2026 02:10:15 +0000 (0:00:00.115) 0:00:01.482 *******
2026-02-20 02:10:21.163367 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:10:21.163373 | orchestrator |
2026-02-20 02:10:21.163383 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163389 | orchestrator | Friday 20 February 2026 02:10:16 +0000 (0:00:00.111) 0:00:01.593 *******
2026-02-20 02:10:21.163395 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:10:21.163401 | orchestrator |
2026-02-20 02:10:21.163406 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163412 | orchestrator | Friday 20 February 2026 02:10:16 +0000 (0:00:00.688) 0:00:02.281 *******
2026-02-20 02:10:21.163418 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:10:21.163424 | orchestrator |
2026-02-20 02:10:21.163429 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163435 | orchestrator |
2026-02-20 02:10:21.163440 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163461 | orchestrator | Friday 20 February 2026 02:10:16 +0000 (0:00:00.112) 0:00:02.394 *******
2026-02-20 02:10:21.163468 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:10:21.163473 | orchestrator |
2026-02-20 02:10:21.163479 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163485 | orchestrator | Friday 20 February 2026 02:10:17 +0000 (0:00:00.228) 0:00:02.622 *******
2026-02-20 02:10:21.163491 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:10:21.163497 | orchestrator |
2026-02-20 02:10:21.163509 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163517 | orchestrator | Friday 20 February 2026 02:10:17 +0000 (0:00:00.686) 0:00:03.308 *******
2026-02-20 02:10:21.163524 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:10:21.163530 | orchestrator |
2026-02-20 02:10:21.163536 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163543 | orchestrator |
2026-02-20 02:10:21.163549 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163555 | orchestrator | Friday 20 February 2026 02:10:17 +0000 (0:00:00.131) 0:00:03.440 *******
2026-02-20 02:10:21.163561 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:10:21.163567 | orchestrator |
2026-02-20 02:10:21.163572 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163578 | orchestrator | Friday 20 February 2026 02:10:18 +0000 (0:00:00.107) 0:00:03.547 *******
2026-02-20 02:10:21.163585 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:10:21.163591 | orchestrator |
2026-02-20 02:10:21.163597 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163604 | orchestrator | Friday 20 February 2026 02:10:18 +0000 (0:00:00.718) 0:00:04.265 *******
2026-02-20 02:10:21.163608 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:10:21.163612 | orchestrator |
2026-02-20 02:10:21.163615 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163619 | orchestrator |
2026-02-20 02:10:21.163623 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163627 | orchestrator | Friday 20 February 2026 02:10:18 +0000 (0:00:00.127) 0:00:04.393 *******
2026-02-20 02:10:21.163630 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:10:21.163634 | orchestrator |
2026-02-20 02:10:21.163638 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163647 | orchestrator | Friday 20 February 2026 02:10:18 +0000 (0:00:00.105) 0:00:04.499 *******
2026-02-20 02:10:21.163651 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:10:21.163655 | orchestrator |
2026-02-20 02:10:21.163658 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163662 | orchestrator | Friday 20 February 2026 02:10:19 +0000 (0:00:00.731) 0:00:05.230 *******
2026-02-20 02:10:21.163666 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:10:21.163670 | orchestrator |
2026-02-20 02:10:21.163674 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-20 02:10:21.163677 | orchestrator |
2026-02-20 02:10:21.163681 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-20 02:10:21.163685 | orchestrator | Friday 20 February 2026 02:10:19 +0000 (0:00:00.167) 0:00:05.398 *******
2026-02-20 02:10:21.163688 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:10:21.163692 | orchestrator |
2026-02-20 02:10:21.163696 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-20 02:10:21.163700 | orchestrator | Friday 20 February 2026 02:10:20 +0000 (0:00:00.118) 0:00:05.516 *******
2026-02-20 02:10:21.163703 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:10:21.163707 | orchestrator |
2026-02-20 02:10:21.163711 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-20 02:10:21.163715 | orchestrator | Friday 20 February 2026 02:10:20 +0000 (0:00:00.679) 0:00:06.196 *******
2026-02-20 02:10:21.163736 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:10:21.163745 | orchestrator |
2026-02-20 02:10:21.163751 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:10:21.163757 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:10:21.163764 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:10:21.163771 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:10:21.163778 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:10:21.163785 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:10:21.163792 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:10:21.163798 | orchestrator | 2026-02-20 02:10:21.163805 | orchestrator | 2026-02-20 02:10:21.163810 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:10:21.163815 | orchestrator | Friday 20 February 2026 02:10:20 +0000 (0:00:00.029) 0:00:06.225 ******* 2026-02-20 02:10:21.163819 | orchestrator | =============================================================================== 2026-02-20 02:10:21.163823 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.55s 2026-02-20 02:10:21.163829 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-02-20 02:10:21.163836 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2026-02-20 02:10:21.524356 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-20 02:10:33.741941 | orchestrator | 2026-02-20 02:10:33 | INFO  | Task 99b7abce-dfa4-4720-ad85-3f9bf23111b3 (wait-for-connection) was prepared for execution. 2026-02-20 02:10:33.742011 | orchestrator | 2026-02-20 02:10:33 | INFO  | It takes a moment until task 99b7abce-dfa4-4720-ad85-3f9bf23111b3 (wait-for-connection) has been started and output is visible here. 
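The reboot play deliberately skips its own "wait for the reboot to complete" task and hands reconnection to the separate `wait-for-connection` run. A minimal shell sketch of that two-step pattern follows; the `reboot_and_wait` wrapper name and the exact arguments of the reboot invocation are assumptions (only the `wait-for-connection` call and the `ireallymeanit=yes` confirmation guard appear verbatim in the log):

```shell
# Hypothetical wrapper around the two osism calls shown in the log:
# trigger the reboot play without waiting, then block until the nodes
# accept connections again. "ireallymeanit=yes" answers the
# "Exit playbook, if user did not mean to reboot systems" guard task.
reboot_and_wait() {
    local limit=$1
    osism apply reboot -l "$limit" -e ireallymeanit=yes
    osism apply wait-for-connection -l "$limit" -e ireallymeanit=yes
}
```

Splitting the reboot from the wait keeps the reboot task itself fast (it reports `changed` immediately) and concentrates the reconnect timeout in one place.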
2026-02-20 02:11:01.220827 | orchestrator |
2026-02-20 02:11:01.220922 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-20 02:11:01.220933 | orchestrator |
2026-02-20 02:11:01.220944 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-20 02:11:01.220958 | orchestrator | Friday 20 February 2026 02:10:38 +0000 (0:00:00.310) 0:00:00.310 *******
2026-02-20 02:11:01.220972 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:11:01.220986 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:11:01.220999 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:11:01.221012 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:11:01.221025 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:11:01.221038 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:11:01.221052 | orchestrator |
2026-02-20 02:11:01.221065 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:11:01.221080 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221096 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221110 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221124 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221138 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221152 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:01.221167 | orchestrator |
2026-02-20 02:11:01.221181 | orchestrator |
2026-02-20 02:11:01.221194 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:11:01.221208 | orchestrator | Friday 20 February 2026 02:11:00 +0000 (0:00:22.365) 0:00:22.676 *******
2026-02-20 02:11:01.221221 | orchestrator | ===============================================================================
2026-02-20 02:11:01.221235 | orchestrator | Wait until remote system is reachable ---------------------------------- 22.37s
2026-02-20 02:11:01.603296 | orchestrator | + osism apply hddtemp
2026-02-20 02:11:13.900825 | orchestrator | 2026-02-20 02:11:13 | INFO  | Task a4670a67-5f9b-48f9-9def-9bf03b881047 (hddtemp) was prepared for execution.
2026-02-20 02:11:13.900914 | orchestrator | 2026-02-20 02:11:13 | INFO  | It takes a moment until task a4670a67-5f9b-48f9-9def-9bf03b881047 (hddtemp) has been started and output is visible here.
2026-02-20 02:11:44.385353 | orchestrator |
2026-02-20 02:11:44.385428 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-20 02:11:44.385440 | orchestrator |
2026-02-20 02:11:44.385449 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-20 02:11:44.385457 | orchestrator | Friday 20 February 2026 02:11:18 +0000 (0:00:00.309) 0:00:00.309 *******
2026-02-20 02:11:44.385464 | orchestrator | ok: [testbed-manager]
2026-02-20 02:11:44.385473 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:11:44.385480 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:11:44.385487 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:11:44.385493 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:11:44.385500 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:11:44.385507 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:11:44.385515 | orchestrator |
2026-02-20 02:11:44.385523 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-20 02:11:44.385530 | orchestrator | Friday 20 February 2026 02:11:19 +0000 (0:00:00.802) 0:00:01.111 *******
2026-02-20 02:11:44.385539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:11:44.385575 | orchestrator |
2026-02-20 02:11:44.385583 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-20 02:11:44.385590 | orchestrator | Friday 20 February 2026 02:11:20 +0000 (0:00:01.299) 0:00:02.411 *******
2026-02-20 02:11:44.385597 | orchestrator | ok: [testbed-manager]
2026-02-20 02:11:44.385604 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:11:44.385611 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:11:44.385617 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:11:44.385625 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:11:44.385633 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:11:44.385640 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:11:44.385647 | orchestrator |
2026-02-20 02:11:44.385654 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-20 02:11:44.385662 | orchestrator | Friday 20 February 2026 02:11:22 +0000 (0:00:02.089) 0:00:04.500 *******
2026-02-20 02:11:44.385669 | orchestrator | changed: [testbed-manager]
2026-02-20 02:11:44.385677 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:11:44.385684 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:11:44.385692 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:11:44.385698 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:11:44.385705 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:11:44.385713 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:11:44.385721 | orchestrator |
2026-02-20 02:11:44.385729 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-20 02:11:44.385736 | orchestrator | Friday 20 February 2026 02:11:24 +0000 (0:00:01.370) 0:00:05.870 *******
2026-02-20 02:11:44.385743 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:11:44.385750 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:11:44.385756 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:11:44.385763 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:11:44.385770 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:11:44.385792 | orchestrator | ok: [testbed-manager]
2026-02-20 02:11:44.385800 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:11:44.385807 | orchestrator |
2026-02-20 02:11:44.385814 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-20 02:11:44.385821 | orchestrator | Friday 20 February 2026 02:11:26 +0000 (0:00:02.243) 0:00:08.113 *******
2026-02-20 02:11:44.385828 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:11:44.385835 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:11:44.385842 | orchestrator | changed: [testbed-manager]
2026-02-20 02:11:44.385849 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:11:44.385856 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:11:44.385862 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:11:44.385870 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:11:44.385876 | orchestrator |
2026-02-20 02:11:44.385883 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-20 02:11:44.385890 | orchestrator | Friday 20 February 2026 02:11:27 +0000 (0:00:01.034) 0:00:09.148 *******
2026-02-20 02:11:44.385898 | orchestrator | changed: [testbed-manager]
2026-02-20 02:11:44.385905 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:11:44.385913 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:11:44.385919 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:11:44.385926 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:11:44.385932 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:11:44.385939 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:11:44.385948 | orchestrator |
2026-02-20 02:11:44.385956 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-20 02:11:44.385964 | orchestrator | Friday 20 February 2026 02:11:40 +0000 (0:00:12.900) 0:00:22.048 *******
2026-02-20 02:11:44.385973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:11:44.385994 | orchestrator |
2026-02-20 02:11:44.386003 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-20 02:11:44.386070 | orchestrator | Friday 20 February 2026 02:11:41 +0000 (0:00:01.536) 0:00:23.585 *******
2026-02-20 02:11:44.386083 | orchestrator | changed: [testbed-manager]
2026-02-20 02:11:44.386092 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:11:44.386101 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:11:44.386110 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:11:44.386119 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:11:44.386126 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:11:44.386135 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:11:44.386143 | orchestrator |
2026-02-20 02:11:44.386152 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:11:44.386161 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:11:44.386194 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386204 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386214 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386224 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386233 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386242 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:11:44.386252 | orchestrator |
2026-02-20 02:11:44.386260 | orchestrator |
2026-02-20 02:11:44.386267 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:11:44.386275 | orchestrator | Friday 20 February 2026 02:11:43 +0000 (0:00:02.004) 0:00:25.589 *******
2026-02-20 02:11:44.386283 | orchestrator | ===============================================================================
2026-02-20 02:11:44.386290 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.90s
2026-02-20 02:11:44.386299 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.24s
2026-02-20 02:11:44.386306 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.09s
2026-02-20 02:11:44.386314 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.00s
2026-02-20 02:11:44.386323 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.54s
2026-02-20 02:11:44.386332 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.37s
2026-02-20 02:11:44.386340 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.30s
2026-02-20 02:11:44.386347 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 1.03s
2026-02-20 02:11:44.386355 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.80s
2026-02-20 02:11:44.752275 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-20 02:11:44.815146 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 02:11:44.815268 | orchestrator | + sudo systemctl restart manager.service
2026-02-20 02:11:58.848323 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-20 02:11:58.848480 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-20 02:11:58.848515 | orchestrator | + local max_attempts=60
2026-02-20 02:11:58.848523 | orchestrator | + local name=ceph-ansible
2026-02-20 02:11:58.848529 | orchestrator | + local attempt_num=1
2026-02-20 02:11:58.848536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:11:58.894410 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:11:58.894477 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:11:58.894483 | orchestrator | + sleep 5
2026-02-20 02:12:03.898517 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:03.954644 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:03.954750 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:03.954766 | orchestrator | + sleep 5
2026-02-20 02:12:08.957224 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:08.992510 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:08.992619 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:08.992643 | orchestrator | + sleep 5
2026-02-20 02:12:13.996069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:14.036078 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:14.036175 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:14.036187 | orchestrator | + sleep 5
2026-02-20 02:12:19.041666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:19.084207 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:19.084284 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:19.084291 | orchestrator | + sleep 5
2026-02-20 02:12:24.089894 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:24.131584 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:24.131652 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:24.131657 | orchestrator | + sleep 5
2026-02-20 02:12:29.137071 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:29.173707 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:29.173786 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:29.173792 | orchestrator | + sleep 5
2026-02-20 02:12:34.177355 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:34.227070 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:34.227143 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:34.227150 | orchestrator | + sleep 5
2026-02-20 02:12:39.231238 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:39.282230 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:39.282328 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:39.282337 | orchestrator | + sleep 5
2026-02-20 02:12:44.286389 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:44.330852 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:44.330923 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:44.330929 | orchestrator | + sleep 5
2026-02-20 02:12:49.334666 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:49.367512 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:49.367572 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:49.367578 | orchestrator | + sleep 5
2026-02-20 02:12:54.372286 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:54.418894 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:54.418986 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:54.418996 | orchestrator | + sleep 5
2026-02-20 02:12:59.422168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:12:59.459421 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-20 02:12:59.459505 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-20 02:12:59.459513 | orchestrator | + sleep 5
2026-02-20 02:13:04.462819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-20 02:13:04.498170 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:13:04.498260 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-20 02:13:04.498269 | orchestrator | + local max_attempts=60
2026-02-20 02:13:04.498321 | orchestrator | + local name=kolla-ansible
2026-02-20 02:13:04.498337 | orchestrator | + local attempt_num=1
2026-02-20 02:13:04.499007 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-20 02:13:04.533824 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:13:04.533895 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-20 02:13:04.533924 | orchestrator | + local max_attempts=60
2026-02-20 02:13:04.533929 | orchestrator | + local name=osism-ansible
2026-02-20 02:13:04.533933 | orchestrator | + local attempt_num=1
2026-02-20 02:13:04.534147 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-20 02:13:04.563777 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-20 02:13:04.563844 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-20 02:13:04.563861 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-20 02:13:04.739646 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-20 02:13:04.912668 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-20 02:13:05.092023 | orchestrator | ARA in osism-ansible already disabled.
2026-02-20 02:13:05.235762 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-20 02:13:05.236875 | orchestrator | + osism apply gather-facts
2026-02-20 02:13:17.643346 | orchestrator | 2026-02-20 02:13:17 | INFO  | Task 1e2372d5-7645-403c-b385-289669297850 (gather-facts) was prepared for execution.
2026-02-20 02:13:17.643454 | orchestrator | 2026-02-20 02:13:17 | INFO  | It takes a moment until task 1e2372d5-7645-403c-b385-289669297850 (gather-facts) has been started and output is visible here.
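The polling trace above (docker inspect, compare against `healthy`, post-increment `attempt_num` against `max_attempts`, `sleep 5`) can be reconstructed as roughly the following helper. This is a sketch inferred from the xtrace output, not the actual testbed script; the `DOCKER_CMD` indirection is an addition (the log calls `/usr/bin/docker` directly) so the loop can be exercised without a Docker daemon:

```shell
# Sketch of wait_for_container_healthy as implied by the xtrace above.
# DOCKER_CMD is a hypothetical override for testing; the real script
# appears to invoke /usr/bin/docker directly.
DOCKER_CMD="${DOCKER_CMD:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        # Read the container's health state from its inspect data.
        status=$($DOCKER_CMD inspect -f '{{.State.Health.Status}}' "$name")
        [[ "$status" == "healthy" ]] && return 0
        # Give up once the configured number of polls has been used.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log the helper is invoked as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 polls five seconds apart; the container's status moves through `unhealthy` and `starting` before reaching `healthy` after the `manager.service` restart.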
2026-02-20 02:13:31.535527 | orchestrator |
2026-02-20 02:13:31.535626 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-20 02:13:31.535637 | orchestrator |
2026-02-20 02:13:31.535644 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 02:13:31.535650 | orchestrator | Friday 20 February 2026 02:13:22 +0000 (0:00:00.232) 0:00:00.232 *******
2026-02-20 02:13:31.535657 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:13:31.535664 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:13:31.535670 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:13:31.535676 | orchestrator | ok: [testbed-manager]
2026-02-20 02:13:31.535682 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:13:31.535688 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:13:31.535694 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:13:31.535700 | orchestrator |
2026-02-20 02:13:31.535707 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-20 02:13:31.535713 | orchestrator |
2026-02-20 02:13:31.535719 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-20 02:13:31.535727 | orchestrator | Friday 20 February 2026 02:13:30 +0000 (0:00:08.438) 0:00:08.670 *******
2026-02-20 02:13:31.535733 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:13:31.535743 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:13:31.535750 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:13:31.535758 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:13:31.535763 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:13:31.535770 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:13:31.535776 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:13:31.535782 | orchestrator |
2026-02-20 02:13:31.535788 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:13:31.535794 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535801 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535807 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535814 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535819 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535825 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535857 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 02:13:31.535863 | orchestrator |
2026-02-20 02:13:31.535869 | orchestrator |
2026-02-20 02:13:31.535876 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:13:31.535882 | orchestrator | Friday 20 February 2026 02:13:31 +0000 (0:00:00.571) 0:00:09.242 *******
2026-02-20 02:13:31.535889 | orchestrator | ===============================================================================
2026-02-20 02:13:31.535895 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.44s
2026-02-20 02:13:31.535901 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-02-20 02:13:31.893849 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-02-20 02:13:31.909837 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-02-20 02:13:31.921441 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-02-20 02:13:31.947157 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-02-20 02:13:31.966758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-02-20 02:13:31.978903 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-02-20 02:13:31.993907 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-02-20 02:13:32.005169 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-02-20 02:13:32.015910 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-02-20 02:13:32.027348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-02-20 02:13:32.046298 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-02-20 02:13:32.063507 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-02-20 02:13:32.082668 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-02-20 02:13:32.107955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-02-20 02:13:32.124585 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-02-20 02:13:32.140651 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-20 02:13:32.160686 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-20 02:13:32.176605 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-20 02:13:32.198059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-20 02:13:32.210619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-20 02:13:32.226149 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-20 02:13:32.240356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-20 02:13:32.256763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-20 02:13:32.275290 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-20 02:13:32.450895 | orchestrator | ok: Runtime: 0:26:08.662639
2026-02-20 02:13:32.549502 |
2026-02-20 02:13:32.549640 | TASK [Deploy services]
2026-02-20 02:13:33.266450 | orchestrator |
2026-02-20 02:13:33.266620 | orchestrator | # DEPLOY SERVICES
2026-02-20 02:13:33.266643 | orchestrator |
2026-02-20 02:13:33.266654 | orchestrator | + set -e
2026-02-20 02:13:33.266665 | orchestrator | + echo
2026-02-20 02:13:33.266676 | orchestrator | + echo '# DEPLOY SERVICES'
2026-02-20 02:13:33.266687 | orchestrator | + echo
2026-02-20 02:13:33.266736 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 02:13:33.266756 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 02:13:33.266768 | orchestrator | ++ INTERACTIVE=false
2026-02-20 02:13:33.266778 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 02:13:33.266795 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 02:13:33.266804 | orchestrator | + source /opt/manager-vars.sh
2026-02-20 02:13:33.266816 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-20 02:13:33.266825 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-20 02:13:33.266851 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-20 02:13:33.266861 | orchestrator | ++ CEPH_VERSION=reef
2026-02-20 02:13:33.266873 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-20 02:13:33.266882 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-20 02:13:33.266894 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-20 02:13:33.266903 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-20 02:13:33.266912 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-20 02:13:33.266930 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-20 02:13:33.266939 | orchestrator | ++ export ARA=false
2026-02-20 02:13:33.266948 | orchestrator | ++ ARA=false
2026-02-20 02:13:33.266957 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-20 02:13:33.266966 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-20 02:13:33.266974 | orchestrator | ++ export TEMPEST=false
2026-02-20 02:13:33.266983 | orchestrator | ++ TEMPEST=false
2026-02-20 02:13:33.266991 | orchestrator | ++ export IS_ZUUL=true
2026-02-20 02:13:33.267000 | orchestrator | ++ IS_ZUUL=true
2026-02-20 02:13:33.267009 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:13:33.267018 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:13:33.267027 | orchestrator | ++ export EXTERNAL_API=false
2026-02-20 02:13:33.267035 | orchestrator | ++ EXTERNAL_API=false
2026-02-20 02:13:33.267047 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-20 02:13:33.267063 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-20 02:13:33.267083 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-20 02:13:33.267106 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-20 02:13:33.267274 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-20 02:13:33.267297 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-20 02:13:33.267399 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-20 02:13:33.274934 | orchestrator |
2026-02-20 02:13:33.275045 | orchestrator | # PULL IMAGES
2026-02-20 02:13:33.275059 | orchestrator |
2026-02-20 02:13:33.275072 | orchestrator | + set -e
2026-02-20 02:13:33.275083 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 02:13:33.275098 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 02:13:33.275110 | orchestrator | ++ INTERACTIVE=false
2026-02-20 02:13:33.275122 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 02:13:33.275133 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 02:13:33.275144 | orchestrator | + source /opt/manager-vars.sh
2026-02-20 02:13:33.275155 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-20 02:13:33.275167 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-20 02:13:33.275177 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-20 02:13:33.275252 | orchestrator | ++ CEPH_VERSION=reef
2026-02-20 02:13:33.275265 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-20 02:13:33.275276 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-20 02:13:33.275287 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-20 02:13:33.275298 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-20 02:13:33.275308 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-20 02:13:33.275317 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-20 02:13:33.275327 | orchestrator | ++ export ARA=false
2026-02-20 02:13:33.275337 | orchestrator | ++ ARA=false
2026-02-20 02:13:33.275351 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-20 02:13:33.275361 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-20 02:13:33.275371 | orchestrator | ++ export TEMPEST=false
2026-02-20 02:13:33.275380 | orchestrator | ++ TEMPEST=false
2026-02-20 02:13:33.275390 | orchestrator | ++ export IS_ZUUL=true
2026-02-20 02:13:33.275400 | orchestrator | ++ IS_ZUUL=true
2026-02-20 02:13:33.275409 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:13:33.275458 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:13:33.275469 | orchestrator | ++ export EXTERNAL_API=false
2026-02-20 02:13:33.275479 | orchestrator | ++ EXTERNAL_API=false
2026-02-20 02:13:33.275489 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-20 02:13:33.275498 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-20 02:13:33.275537 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-20 02:13:33.275548 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-20 02:13:33.275557 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-20 02:13:33.275567 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-20 02:13:33.275577 | orchestrator | + echo
2026-02-20 02:13:33.275587 | orchestrator | + echo '# PULL IMAGES'
2026-02-20 02:13:33.275596 | orchestrator | + echo
2026-02-20 02:13:33.275786 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-20 02:13:33.346657 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 02:13:33.346756 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-20 02:13:35.384616 | orchestrator | 2026-02-20 02:13:35 | INFO  | Trying to run play pull-images in environment custom
2026-02-20 02:13:45.574011 | orchestrator | 2026-02-20 02:13:45 | INFO  | Task 67708996-e34d-49bc-bd6d-cfa02f7b6f77 (pull-images) was prepared for execution.
2026-02-20 02:13:45.574259 | orchestrator | 2026-02-20 02:13:45 | INFO  | Task 67708996-e34d-49bc-bd6d-cfa02f7b6f77 is running in background. No more output. Check ARA for logs.
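The trace above gates the image pull on `semver 9.5.0 7.0.0` printing a value that is `-ge 0`. A minimal sketch of such a comparison helper, assuming the real `semver` prints -1, 0, or 1 in the style of a `sort -V` based compare (the function body below is illustrative, not the testbed's actual implementation):

```shell
# Illustrative reimplementation of the `semver A B` gate seen in the trace:
# prints -1, 0, or 1 as A is older than, equal to, or newer than B.
# Assumption: modelled on GNU coreutils `sort -V`; the real helper may differ.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1    # $1 sorts first, so it is the older version
    else
        echo 1
    fi
}

# The gate from the trace: only run the step on manager version >= 7.0.0.
[ "$(semver 9.5.0 7.0.0)" -ge 0 ] && echo "version gate passed"
```

Using `sort -V` makes the compare version-aware, so `1.10.0` correctly sorts after `1.9.0`, which a plain string compare would get wrong.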
2026-02-20 02:13:45.918217 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-02-20 02:13:58.160444 | orchestrator | 2026-02-20 02:13:58 | INFO  | Task d711e9d1-35a8-4525-878d-37643834350f (cgit) was prepared for execution.
2026-02-20 02:13:58.160572 | orchestrator | 2026-02-20 02:13:58 | INFO  | Task d711e9d1-35a8-4525-878d-37643834350f is running in background. No more output. Check ARA for logs.
2026-02-20 02:14:11.625928 | orchestrator | 2026-02-20 02:14:11 | INFO  | Task f35967fd-0589-48dd-be3b-85eac3c4d238 (dotfiles) was prepared for execution.
2026-02-20 02:14:11.626137 | orchestrator | 2026-02-20 02:14:11 | INFO  | Task f35967fd-0589-48dd-be3b-85eac3c4d238 is running in background. No more output. Check ARA for logs.
2026-02-20 02:14:24.335243 | orchestrator | 2026-02-20 02:14:24 | INFO  | Task b2b92527-351b-441b-8da4-cb5c74f1aa9f (homer) was prepared for execution.
2026-02-20 02:14:24.335329 | orchestrator | 2026-02-20 02:14:24 | INFO  | Task b2b92527-351b-441b-8da4-cb5c74f1aa9f is running in background. No more output. Check ARA for logs.
2026-02-20 02:14:36.940394 | orchestrator | 2026-02-20 02:14:36 | INFO  | Task c05644da-1e92-4aef-a937-ddb9a04449e8 (phpmyadmin) was prepared for execution.
2026-02-20 02:14:36.940477 | orchestrator | 2026-02-20 02:14:36 | INFO  | Task c05644da-1e92-4aef-a937-ddb9a04449e8 is running in background. No more output. Check ARA for logs.
2026-02-20 02:14:49.772832 | orchestrator | 2026-02-20 02:14:49 | INFO  | Task 5160bbd8-1477-45a2-8216-6e9172a2600c (sosreport) was prepared for execution.
2026-02-20 02:14:49.772923 | orchestrator | 2026-02-20 02:14:49 | INFO  | Task 5160bbd8-1477-45a2-8216-6e9172a2600c is running in background. No more output. Check ARA for logs.
2026-02-20 02:14:50.305457 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-02-20 02:14:50.315142 | orchestrator | + set -e
2026-02-20 02:14:50.315215 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 02:14:50.315224 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 02:14:50.315230 | orchestrator | ++ INTERACTIVE=false
2026-02-20 02:14:50.315238 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 02:14:50.315243 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 02:14:50.315249 | orchestrator | + source /opt/manager-vars.sh
2026-02-20 02:14:50.315254 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-20 02:14:50.315259 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-20 02:14:50.315264 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-20 02:14:50.315269 | orchestrator | ++ CEPH_VERSION=reef
2026-02-20 02:14:50.315275 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-20 02:14:50.315280 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-20 02:14:50.315285 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-20 02:14:50.315291 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-20 02:14:50.315296 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-20 02:14:50.315301 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-20 02:14:50.315306 | orchestrator | ++ export ARA=false
2026-02-20 02:14:50.315311 | orchestrator | ++ ARA=false
2026-02-20 02:14:50.315317 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-20 02:14:50.315345 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-20 02:14:50.315350 | orchestrator | ++ export TEMPEST=false
2026-02-20 02:14:50.315355 | orchestrator | ++ TEMPEST=false
2026-02-20 02:14:50.315360 | orchestrator | ++ export IS_ZUUL=true
2026-02-20 02:14:50.315365 | orchestrator | ++ IS_ZUUL=true
2026-02-20 02:14:50.315383 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:14:50.315393 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 02:14:50.315398 | orchestrator | ++ export EXTERNAL_API=false
2026-02-20 02:14:50.315403 | orchestrator | ++ EXTERNAL_API=false
2026-02-20 02:14:50.315408 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-20 02:14:50.315413 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-20 02:14:50.315419 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-20 02:14:50.315424 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-20 02:14:50.315429 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-20 02:14:50.315434 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-20 02:14:50.315925 | orchestrator | ++ semver 9.5.0 8.0.3
2026-02-20 02:14:50.366614 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 02:14:50.366667 | orchestrator | + osism apply frr
2026-02-20 02:15:03.166985 | orchestrator | 2026-02-20 02:15:03 | INFO  | Task becde7cb-fbb8-47b1-8f2f-7610e355d072 (frr) was prepared for execution.
2026-02-20 02:15:03.167114 | orchestrator | 2026-02-20 02:15:03 | INFO  | It takes a moment until task becde7cb-fbb8-47b1-8f2f-7610e355d072 (frr) has been started and output is visible here.
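include.sh exports OSISM_APPLY_RETRY=1 before the `osism apply` calls in this trace. A hedged sketch of how a retry wrapper around such a call could honor that variable; the `apply_with_retry` name and its logic are assumptions for illustration, not the actual include.sh code:

```shell
# Hypothetical sketch: retry an "osism apply"-style step up to
# OSISM_APPLY_RETRY times (defaulting to a single attempt when unset).
apply_with_retry() {
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "${OSISM_APPLY_RETRY:-1}" ]; then
            echo "step '$*' failed after $attempt attempt(s)" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        echo "retrying '$*' (attempt $attempt)" >&2
    done
}

# Example (illustrative): apply_with_retry osism apply frr
```

The `until "$@"` loop exits as soon as the wrapped command succeeds, so a successful first attempt costs nothing extra.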
2026-02-20 02:15:43.866801 | orchestrator |
2026-02-20 02:15:43.866884 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-20 02:15:43.866898 | orchestrator |
2026-02-20 02:15:43.866906 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-20 02:15:43.866920 | orchestrator | Friday 20 February 2026 02:15:12 +0000 (0:00:00.633) 0:00:00.633 *******
2026-02-20 02:15:43.866927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-20 02:15:43.866934 | orchestrator |
2026-02-20 02:15:43.866941 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-20 02:15:43.866947 | orchestrator | Friday 20 February 2026 02:15:13 +0000 (0:00:00.659) 0:00:01.293 *******
2026-02-20 02:15:43.866954 | orchestrator | changed: [testbed-manager]
2026-02-20 02:15:43.866962 | orchestrator |
2026-02-20 02:15:43.866969 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-20 02:15:43.866977 | orchestrator | Friday 20 February 2026 02:15:17 +0000 (0:00:04.012) 0:00:05.305 *******
2026-02-20 02:15:43.866984 | orchestrator | changed: [testbed-manager]
2026-02-20 02:15:43.866990 | orchestrator |
2026-02-20 02:15:43.866997 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-20 02:15:43.867004 | orchestrator | Friday 20 February 2026 02:15:30 +0000 (0:00:13.918) 0:00:19.224 *******
2026-02-20 02:15:43.867010 | orchestrator | ok: [testbed-manager]
2026-02-20 02:15:43.867019 | orchestrator |
2026-02-20 02:15:43.867026 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-20 02:15:43.867032 | orchestrator | Friday 20 February 2026 02:15:32 +0000 (0:00:01.325) 0:00:20.550 *******
2026-02-20 02:15:43.867038 | orchestrator | changed: [testbed-manager]
2026-02-20 02:15:43.867046 | orchestrator |
2026-02-20 02:15:43.867053 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-20 02:15:43.867060 | orchestrator | Friday 20 February 2026 02:15:33 +0000 (0:00:01.255) 0:00:21.805 *******
2026-02-20 02:15:43.867067 | orchestrator | ok: [testbed-manager]
2026-02-20 02:15:43.867074 | orchestrator |
2026-02-20 02:15:43.867080 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-20 02:15:43.867088 | orchestrator | Friday 20 February 2026 02:15:34 +0000 (0:00:01.451) 0:00:23.257 *******
2026-02-20 02:15:43.867095 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:15:43.867101 | orchestrator |
2026-02-20 02:15:43.867108 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-20 02:15:43.867115 | orchestrator | Friday 20 February 2026 02:15:35 +0000 (0:00:00.229) 0:00:23.486 *******
2026-02-20 02:15:43.867143 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:15:43.867151 | orchestrator |
2026-02-20 02:15:43.867157 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-20 02:15:43.867163 | orchestrator | Friday 20 February 2026 02:15:35 +0000 (0:00:00.217) 0:00:23.704 *******
2026-02-20 02:15:43.867170 | orchestrator | changed: [testbed-manager]
2026-02-20 02:15:43.867176 | orchestrator |
2026-02-20 02:15:43.867182 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-20 02:15:43.867189 | orchestrator | Friday 20 February 2026 02:15:36 +0000 (0:00:01.320) 0:00:25.025 *******
2026-02-20 02:15:43.867196 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-20 02:15:43.867202 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-20 02:15:43.867211 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-20 02:15:43.867218 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-20 02:15:43.867224 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-20 02:15:43.867231 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-20 02:15:43.867237 | orchestrator |
2026-02-20 02:15:43.867244 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-20 02:15:43.867252 | orchestrator | Friday 20 February 2026 02:15:39 +0000 (0:00:02.841) 0:00:27.866 *******
2026-02-20 02:15:43.867259 | orchestrator | ok: [testbed-manager]
2026-02-20 02:15:43.867266 | orchestrator |
2026-02-20 02:15:43.867273 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-02-20 02:15:43.867281 | orchestrator | Friday 20 February 2026 02:15:41 +0000 (0:00:02.040) 0:00:29.907 *******
2026-02-20 02:15:43.867288 | orchestrator | changed: [testbed-manager]
2026-02-20 02:15:43.867294 | orchestrator |
2026-02-20 02:15:43.867302 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:15:43.867310 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:15:43.867317 | orchestrator |
2026-02-20 02:15:43.867323 | orchestrator |
2026-02-20 02:15:43.867338 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:15:43.867345 | orchestrator | Friday 20 February 2026 02:15:43 +0000 (0:00:01.773) 0:00:31.680 *******
2026-02-20 02:15:43.867352 | orchestrator | ===============================================================================
2026-02-20 02:15:43.867359 | orchestrator | osism.services.frr : Install frr package ------------------------------- 13.92s
2026-02-20 02:15:43.867366 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 4.01s
2026-02-20 02:15:43.867373 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.84s
2026-02-20 02:15:43.867380 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.04s
2026-02-20 02:15:43.867387 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.77s
2026-02-20 02:15:43.867415 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.45s
2026-02-20 02:15:43.867422 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.33s
2026-02-20 02:15:43.867428 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.32s
2026-02-20 02:15:43.867435 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.26s
2026-02-20 02:15:43.867451 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.66s
2026-02-20 02:15:43.867460 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.23s
2026-02-20 02:15:43.867468 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.22s
2026-02-20 02:15:44.365525 | orchestrator | + osism apply kubernetes
2026-02-20 02:15:47.732255 | orchestrator | 2026-02-20 02:15:47 | INFO  | Task 7e6ddcb3-e5f6-4704-ad08-e2580784cdc6 (kubernetes) was prepared for execution.
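For reference, the sysctl values the frr role applied above can be collected in a sysctl.d-style fragment; the file path is illustrative, while the keys and values are taken verbatim from the task output:

```
# /etc/sysctl.d/90-frr.conf (illustrative path)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

rp_filter = 2 selects loose reverse-path filtering, which is the usual choice when asymmetric routing (as with BGP uplinks) is expected.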
2026-02-20 02:15:47.732326 | orchestrator | 2026-02-20 02:15:47 | INFO  | It takes a moment until task 7e6ddcb3-e5f6-4704-ad08-e2580784cdc6 (kubernetes) has been started and output is visible here.
2026-02-20 02:16:17.578709 | orchestrator |
2026-02-20 02:16:17.578803 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-20 02:16:17.578810 | orchestrator |
2026-02-20 02:16:17.578815 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-20 02:16:17.578820 | orchestrator | Friday 20 February 2026 02:15:53 +0000 (0:00:00.207) 0:00:00.207 *******
2026-02-20 02:16:17.578824 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:16:17.578829 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:16:17.578833 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:16:17.578837 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:16:17.578841 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:16:17.578845 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:16:17.578849 | orchestrator |
2026-02-20 02:16:17.578853 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-20 02:16:17.578856 | orchestrator | Friday 20 February 2026 02:15:54 +0000 (0:00:00.881) 0:00:01.089 *******
2026-02-20 02:16:17.578860 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.578865 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.578869 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.578872 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.578876 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.578880 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.578883 | orchestrator |
2026-02-20 02:16:17.578887 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-20 02:16:17.578893 | orchestrator | Friday 20 February 2026 02:15:54 +0000 (0:00:00.886) 0:00:01.975 *******
2026-02-20 02:16:17.578897 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.578901 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.578904 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.578908 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.578912 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.578916 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.578919 | orchestrator |
2026-02-20 02:16:17.578923 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-20 02:16:17.578927 | orchestrator | Friday 20 February 2026 02:15:55 +0000 (0:00:00.962) 0:00:02.937 *******
2026-02-20 02:16:17.578931 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:16:17.578934 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:16:17.578938 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:16:17.578944 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:16:17.578948 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:16:17.578952 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:16:17.578956 | orchestrator |
2026-02-20 02:16:17.578960 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-20 02:16:17.578964 | orchestrator | Friday 20 February 2026 02:15:58 +0000 (0:00:02.612) 0:00:05.550 *******
2026-02-20 02:16:17.578967 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:16:17.578971 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:16:17.578975 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:16:17.578979 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:16:17.578982 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:16:17.578986 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:16:17.578990 | orchestrator |
2026-02-20 02:16:17.578994 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-20 02:16:17.578998 | orchestrator | Friday 20 February 2026 02:15:59 +0000 (0:00:01.389) 0:00:06.940 *******
2026-02-20 02:16:17.579001 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:16:17.579034 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:16:17.579038 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:16:17.579042 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:16:17.579046 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:16:17.579050 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:16:17.579053 | orchestrator |
2026-02-20 02:16:17.579061 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-20 02:16:17.579065 | orchestrator | Friday 20 February 2026 02:16:01 +0000 (0:00:01.619) 0:00:08.559 *******
2026-02-20 02:16:17.579069 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579073 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579076 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579080 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579084 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579087 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579091 | orchestrator |
2026-02-20 02:16:17.579095 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-20 02:16:17.579099 | orchestrator | Friday 20 February 2026 02:16:02 +0000 (0:00:00.957) 0:00:09.517 *******
2026-02-20 02:16:17.579102 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579106 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579110 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579113 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579117 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579121 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579125 | orchestrator |
2026-02-20 02:16:17.579128 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-20 02:16:17.579132 | orchestrator | Friday 20 February 2026 02:16:03 +0000 (0:00:01.062) 0:00:10.580 *******
2026-02-20 02:16:17.579136 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579140 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579143 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579147 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579151 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579155 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579158 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579162 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579166 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579170 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579183 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579188 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579191 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579195 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579199 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579202 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 02:16:17.579206 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 02:16:17.579210 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579214 | orchestrator |
2026-02-20 02:16:17.579217 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-20 02:16:17.579221 | orchestrator | Friday 20 February 2026 02:16:04 +0000 (0:00:00.634) 0:00:11.214 *******
2026-02-20 02:16:17.579225 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579229 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579232 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579239 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579243 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579247 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579251 | orchestrator |
2026-02-20 02:16:17.579254 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-20 02:16:17.579259 | orchestrator | Friday 20 February 2026 02:16:05 +0000 (0:00:01.546) 0:00:12.760 *******
2026-02-20 02:16:17.579263 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:16:17.579267 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:16:17.579270 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:16:17.579275 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:16:17.579279 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:16:17.579283 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:16:17.579287 | orchestrator |
2026-02-20 02:16:17.579292 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-20 02:16:17.579296 | orchestrator | Friday 20 February 2026 02:16:06 +0000 (0:00:00.989) 0:00:13.749 *******
2026-02-20 02:16:17.579300 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:16:17.579304 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:16:17.579309 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:16:17.579313 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:16:17.579317 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:16:17.579322 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:16:17.579326 | orchestrator |
2026-02-20 02:16:17.579330 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-20 02:16:17.579334 | orchestrator | Friday 20 February 2026 02:16:12 +0000 (0:00:06.025) 0:00:19.775 *******
2026-02-20 02:16:17.579338 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579346 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579350 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579355 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579359 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579363 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579368 | orchestrator |
2026-02-20 02:16:17.579372 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-20 02:16:17.579376 | orchestrator | Friday 20 February 2026 02:16:13 +0000 (0:00:01.079) 0:00:20.854 *******
2026-02-20 02:16:17.579380 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579385 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579389 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579393 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579397 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579401 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579406 | orchestrator |
2026-02-20 02:16:17.579410 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-20 02:16:17.579415 | orchestrator | Friday 20 February 2026 02:16:15 +0000 (0:00:01.569) 0:00:22.424 *******
2026-02-20 02:16:17.579419 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579424 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579428 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579432 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579436 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579440 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579444 | orchestrator |
2026-02-20 02:16:17.579449 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-20 02:16:17.579453 | orchestrator | Friday 20 February 2026 02:16:16 +0000 (0:00:01.020) 0:00:23.444 *******
2026-02-20 02:16:17.579457 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-20 02:16:17.579466 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-20 02:16:17.579470 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:16:17.579474 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-20 02:16:17.579482 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-20 02:16:17.579486 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:16:17.579490 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-20 02:16:17.579494 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-20 02:16:17.579499 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:16:17.579503 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-20 02:16:17.579507 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-20 02:16:17.579511 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:16:17.579516 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-20 02:16:17.579520 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-20 02:16:17.579524 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:16:17.579528 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-20 02:16:17.579532 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-20 02:16:17.579537 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:16:17.579541 | orchestrator |
2026-02-20 02:16:17.579545 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-20 02:16:17.579552 | orchestrator | Friday 20 February 2026 02:16:17 +0000 (0:00:01.179) 0:00:24.624 *******
2026-02-20 02:17:42.848260 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:17:42.848366 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:17:42.848382 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:17:42.848390 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:17:42.848397 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:17:42.848404 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:17:42.848410 | orchestrator |
2026-02-20 02:17:42.848418 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-20 02:17:42.848426 | orchestrator | Friday 20 February 2026 02:16:18 +0000 (0:00:00.902) 0:00:25.527 *******
2026-02-20 02:17:42.848433 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:17:42.848439 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:17:42.848445 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:17:42.848452 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:17:42.848458 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:17:42.848464 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:17:42.848470 | orchestrator |
2026-02-20 02:17:42.848477 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-20 02:17:42.848483 | orchestrator |
2026-02-20 02:17:42.848489 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-20 02:17:42.848496 | orchestrator | Friday 20 February 2026 02:16:19 +0000 (0:00:01.534) 0:00:27.062 *******
2026-02-20 02:17:42.848502 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:17:42.848510 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:17:42.848516 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:17:42.848522 | orchestrator |
2026-02-20 02:17:42.848528 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-20 02:17:42.848534 | orchestrator | Friday 20 February 2026 02:16:21 +0000 (0:00:01.509) 0:00:28.571 *******
2026-02-20 02:17:42.848541 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:17:42.848547 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:17:42.848553 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:17:42.848559 | orchestrator |
2026-02-20 02:17:42.848588 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-20 02:17:42.848601 | orchestrator | Friday 20 February 2026 02:16:23 +0000 (0:00:01.532) 0:00:30.104 *******
2026-02-20 02:17:42.848610 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:17:42.848621 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:17:42.848628 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:17:42.848635 | orchestrator |
2026-02-20 02:17:42.848641 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-20 02:17:42.848663 | orchestrator | Friday 20 February 2026 02:16:24 +0000 (0:00:01.010) 0:00:31.114 *******
2026-02-20 02:17:42.848670 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:17:42.848676 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:17:42.848682 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:17:42.848688 | orchestrator |
2026-02-20 02:17:42.848694 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-20 02:17:42.848700 | orchestrator | Friday 20 February 2026 02:16:24 +0000 (0:00:00.313) 0:00:31.895 *******
2026-02-20 02:17:42.848707 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:17:42.848713 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:17:42.848719 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:17:42.848725 | orchestrator |
2026-02-20 02:17:42.848731 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-20 02:17:42.848746 | orchestrator | Friday 20 February 2026 02:16:25 +0000 (0:00:00.930) 0:00:32.209 *******
2026-02-20 02:17:42.848752 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:17:42.848759 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:17:42.848769 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:17:42.848779 | orchestrator |
2026-02-20 02:17:42.848788 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-20 02:17:42.848798 | orchestrator | Friday 20 February 2026 02:16:26 +0000 (0:00:01.485) 0:00:33.139 *******
2026-02-20 02:17:42.848809 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:17:42.848819 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:17:42.848829 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:17:42.848839 | orchestrator |
2026-02-20 02:17:42.848850 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-20 02:17:42.848861 | orchestrator | Friday 20 February 2026 02:16:27 +0000 (0:00:01.485) 0:00:34.625 *******
2026-02-20 02:17:42.848870 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:17:42.848877 | orchestrator |
2026-02-20 02:17:42.848883 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-20 02:17:42.848889
| orchestrator | Friday 20 February 2026 02:16:28 +0000 (0:00:00.510) 0:00:35.135 ******* 2026-02-20 02:17:42.848895 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:17:42.848901 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:17:42.848907 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:17:42.848913 | orchestrator | 2026-02-20 02:17:42.848919 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-20 02:17:42.848926 | orchestrator | Friday 20 February 2026 02:16:29 +0000 (0:00:01.854) 0:00:36.990 ******* 2026-02-20 02:17:42.848932 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:17:42.848938 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:17:42.848944 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:17:42.848950 | orchestrator | 2026-02-20 02:17:42.848956 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-20 02:17:42.848962 | orchestrator | Friday 20 February 2026 02:16:30 +0000 (0:00:00.857) 0:00:37.847 ******* 2026-02-20 02:17:42.848968 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:17:42.848974 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:17:42.848980 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:17:42.848986 | orchestrator | 2026-02-20 02:17:42.848992 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-20 02:17:42.848998 | orchestrator | Friday 20 February 2026 02:16:31 +0000 (0:00:01.026) 0:00:38.874 ******* 2026-02-20 02:17:42.849004 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:17:42.849010 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:17:42.849016 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:17:42.849023 | orchestrator | 2026-02-20 02:17:42.849029 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-20 02:17:42.849050 | orchestrator | Friday 
20 February 2026 02:16:33 +0000 (0:00:01.530) 0:00:40.405 ******* 2026-02-20 02:17:42.849056 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:17:42.849069 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:17:42.849076 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:17:42.849082 | orchestrator | 2026-02-20 02:17:42.849088 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-20 02:17:42.849094 | orchestrator | Friday 20 February 2026 02:16:33 +0000 (0:00:00.663) 0:00:41.068 ******* 2026-02-20 02:17:42.849100 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:17:42.849106 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:17:42.849112 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:17:42.849118 | orchestrator | 2026-02-20 02:17:42.849124 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-20 02:17:42.849130 | orchestrator | Friday 20 February 2026 02:16:34 +0000 (0:00:00.350) 0:00:41.419 ******* 2026-02-20 02:17:42.849136 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:17:42.849142 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:17:42.849148 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:17:42.849154 | orchestrator | 2026-02-20 02:17:42.849165 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-20 02:17:42.849171 | orchestrator | Friday 20 February 2026 02:16:35 +0000 (0:00:01.289) 0:00:42.708 ******* 2026-02-20 02:17:42.849177 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:17:42.849183 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:17:42.849190 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:17:42.849196 | orchestrator | 2026-02-20 02:17:42.849202 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-20 02:17:42.849208 | orchestrator | Friday 20 February 2026 
02:16:38 +0000 (0:00:02.448) 0:00:45.156 ******* 2026-02-20 02:17:42.849214 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:17:42.849221 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:17:42.849227 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:17:42.849236 | orchestrator | 2026-02-20 02:17:42.849243 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-20 02:17:42.849249 | orchestrator | Friday 20 February 2026 02:16:38 +0000 (0:00:00.349) 0:00:45.506 ******* 2026-02-20 02:17:42.849256 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 02:17:42.849264 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 02:17:42.849271 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 02:17:42.849277 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 02:17:42.849283 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 02:17:42.849289 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 02:17:42.849295 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-20 02:17:42.849302 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-20 02:17:42.849308 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-20 02:17:42.849314 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-20 02:17:42.849320 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-20 02:17:42.849391 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-20 02:17:42.849398 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-20 02:17:42.849404 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-20 02:17:42.849410 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-20 02:17:42.849416 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 2026-02-20 02:17:42.849427 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 2026-02-20 02:17:42.849433 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 
2026-02-20 02:17:42.849439 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:17:42.849445 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:17:42.849452 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:17:42.849458 | orchestrator | 2026-02-20 02:17:42.849469 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-20 02:18:26.603705 | orchestrator | Friday 20 February 2026 02:17:42 +0000 (0:01:04.383) 0:01:49.890 ******* 2026-02-20 02:18:26.603818 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:18:26.603834 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:18:26.603846 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:18:26.603857 | orchestrator | 2026-02-20 02:18:26.603869 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-20 02:18:26.603881 | orchestrator | Friday 20 February 2026 02:17:43 +0000 (0:00:00.324) 0:01:50.214 ******* 2026-02-20 02:18:26.603892 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.603903 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.603914 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.603925 | orchestrator | 2026-02-20 02:18:26.603936 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-20 02:18:26.603947 | orchestrator | Friday 20 February 2026 02:17:44 +0000 (0:00:01.052) 0:01:51.267 ******* 2026-02-20 02:18:26.603958 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.603969 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.603979 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.603991 | orchestrator | 2026-02-20 02:18:26.604011 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-20 02:18:26.604031 | orchestrator | Friday 20 February 2026 02:17:45 +0000 (0:00:01.220) 0:01:52.487 ******* 2026-02-20 02:18:26.604051 
| orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604095 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604112 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604128 | orchestrator | 2026-02-20 02:18:26.604146 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-20 02:18:26.604165 | orchestrator | Friday 20 February 2026 02:18:11 +0000 (0:00:26.225) 0:02:18.713 ******* 2026-02-20 02:18:26.604184 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:18:26.604204 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.604223 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:18:26.604242 | orchestrator | 2026-02-20 02:18:26.604261 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-20 02:18:26.604281 | orchestrator | Friday 20 February 2026 02:18:12 +0000 (0:00:00.721) 0:02:19.435 ******* 2026-02-20 02:18:26.604299 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:18:26.604317 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:18:26.604335 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.604353 | orchestrator | 2026-02-20 02:18:26.604405 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-20 02:18:26.604426 | orchestrator | Friday 20 February 2026 02:18:13 +0000 (0:00:00.676) 0:02:20.111 ******* 2026-02-20 02:18:26.604446 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604463 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604476 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604515 | orchestrator | 2026-02-20 02:18:26.604526 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-20 02:18:26.604537 | orchestrator | Friday 20 February 2026 02:18:13 +0000 (0:00:00.650) 0:02:20.762 ******* 2026-02-20 02:18:26.604548 | orchestrator | ok: [testbed-node-1] 
2026-02-20 02:18:26.604559 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:18:26.604570 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.604580 | orchestrator | 2026-02-20 02:18:26.604591 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-20 02:18:26.604602 | orchestrator | Friday 20 February 2026 02:18:14 +0000 (0:00:00.834) 0:02:21.596 ******* 2026-02-20 02:18:26.604613 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:18:26.604623 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:18:26.604634 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.604644 | orchestrator | 2026-02-20 02:18:26.604655 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-20 02:18:26.604666 | orchestrator | Friday 20 February 2026 02:18:14 +0000 (0:00:00.309) 0:02:21.905 ******* 2026-02-20 02:18:26.604677 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604687 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604698 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604709 | orchestrator | 2026-02-20 02:18:26.604719 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-20 02:18:26.604731 | orchestrator | Friday 20 February 2026 02:18:15 +0000 (0:00:00.684) 0:02:22.590 ******* 2026-02-20 02:18:26.604742 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604753 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604764 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604774 | orchestrator | 2026-02-20 02:18:26.604785 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-20 02:18:26.604799 | orchestrator | Friday 20 February 2026 02:18:16 +0000 (0:00:00.675) 0:02:23.265 ******* 2026-02-20 02:18:26.604810 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604821 | 
orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604831 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604842 | orchestrator | 2026-02-20 02:18:26.604852 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-20 02:18:26.604863 | orchestrator | Friday 20 February 2026 02:18:17 +0000 (0:00:00.911) 0:02:24.176 ******* 2026-02-20 02:18:26.604874 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:18:26.604885 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:18:26.604895 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:18:26.604906 | orchestrator | 2026-02-20 02:18:26.604916 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-20 02:18:26.604927 | orchestrator | Friday 20 February 2026 02:18:18 +0000 (0:00:01.131) 0:02:25.308 ******* 2026-02-20 02:18:26.604938 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:18:26.604948 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:18:26.604959 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:18:26.604969 | orchestrator | 2026-02-20 02:18:26.604980 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-20 02:18:26.604991 | orchestrator | Friday 20 February 2026 02:18:18 +0000 (0:00:00.300) 0:02:25.608 ******* 2026-02-20 02:18:26.605001 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:18:26.605012 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:18:26.605023 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:18:26.605034 | orchestrator | 2026-02-20 02:18:26.605045 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-20 02:18:26.605087 | orchestrator | Friday 20 February 2026 02:18:18 +0000 (0:00:00.323) 0:02:25.932 ******* 2026-02-20 02:18:26.605099 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:18:26.605110 | orchestrator | 
ok: [testbed-node-0] 2026-02-20 02:18:26.605121 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.605132 | orchestrator | 2026-02-20 02:18:26.605142 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-20 02:18:26.605153 | orchestrator | Friday 20 February 2026 02:18:19 +0000 (0:00:00.798) 0:02:26.730 ******* 2026-02-20 02:18:26.605164 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:18:26.605175 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:18:26.605186 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:18:26.605196 | orchestrator | 2026-02-20 02:18:26.605208 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-20 02:18:26.605220 | orchestrator | Friday 20 February 2026 02:18:20 +0000 (0:00:00.922) 0:02:27.652 ******* 2026-02-20 02:18:26.605231 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 02:18:26.605242 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 02:18:26.605253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 02:18:26.605264 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 02:18:26.605275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 02:18:26.605285 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 02:18:26.605296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 02:18:26.605308 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 
02:18:26.605318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 02:18:26.605329 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 02:18:26.605340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-20 02:18:26.605351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 02:18:26.605362 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 02:18:26.605373 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-20 02:18:26.605384 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 02:18:26.605395 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 02:18:26.605405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 02:18:26.605416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 02:18:26.605427 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 02:18:26.605438 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 02:18:26.605448 | orchestrator | 2026-02-20 02:18:26.605459 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-20 02:18:26.605470 | orchestrator | 2026-02-20 02:18:26.605672 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-20 02:18:26.605706 | orchestrator | Friday 20 February 2026 02:18:23 +0000 (0:00:03.334) 
0:02:30.987 ******* 2026-02-20 02:18:26.605717 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:18:26.605728 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:18:26.605753 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:18:26.605764 | orchestrator | 2026-02-20 02:18:26.605775 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-20 02:18:26.605786 | orchestrator | Friday 20 February 2026 02:18:24 +0000 (0:00:00.354) 0:02:31.342 ******* 2026-02-20 02:18:26.605796 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:18:26.605807 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:18:26.605817 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:18:26.605828 | orchestrator | 2026-02-20 02:18:26.605838 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-20 02:18:26.605849 | orchestrator | Friday 20 February 2026 02:18:25 +0000 (0:00:00.920) 0:02:32.262 ******* 2026-02-20 02:18:26.605859 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:18:26.605870 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:18:26.605880 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:18:26.605891 | orchestrator | 2026-02-20 02:18:26.605902 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-20 02:18:26.605913 | orchestrator | Friday 20 February 2026 02:18:25 +0000 (0:00:00.356) 0:02:32.619 ******* 2026-02-20 02:18:26.605924 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:18:26.605934 | orchestrator | 2026-02-20 02:18:26.605945 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-20 02:18:26.605955 | orchestrator | Friday 20 February 2026 02:18:26 +0000 (0:00:00.532) 0:02:33.152 ******* 2026-02-20 02:18:26.605966 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:18:26.605976 | 
orchestrator | skipping: [testbed-node-4] 2026-02-20 02:18:26.605985 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:18:26.605994 | orchestrator | 2026-02-20 02:18:26.606074 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-20 02:19:28.940290 | orchestrator | Friday 20 February 2026 02:18:26 +0000 (0:00:00.506) 0:02:33.658 ******* 2026-02-20 02:19:28.940496 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:19:28.940517 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:19:28.940530 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:19:28.940541 | orchestrator | 2026-02-20 02:19:28.940554 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-20 02:19:28.940565 | orchestrator | Friday 20 February 2026 02:18:26 +0000 (0:00:00.321) 0:02:33.979 ******* 2026-02-20 02:19:28.940576 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:19:28.940587 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:19:28.940598 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:19:28.940609 | orchestrator | 2026-02-20 02:19:28.940621 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-20 02:19:28.940632 | orchestrator | Friday 20 February 2026 02:18:27 +0000 (0:00:00.320) 0:02:34.299 ******* 2026-02-20 02:19:28.940643 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:19:28.940654 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:19:28.940665 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:19:28.940702 | orchestrator | 2026-02-20 02:19:28.940713 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-20 02:19:28.940724 | orchestrator | Friday 20 February 2026 02:18:27 +0000 (0:00:00.695) 0:02:34.995 ******* 2026-02-20 02:19:28.940735 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:19:28.940746 | 
orchestrator | changed: [testbed-node-4] 2026-02-20 02:19:28.940757 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:19:28.940768 | orchestrator | 2026-02-20 02:19:28.940779 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-20 02:19:28.940790 | orchestrator | Friday 20 February 2026 02:18:29 +0000 (0:00:01.473) 0:02:36.469 ******* 2026-02-20 02:19:28.940801 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:19:28.940812 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:19:28.940824 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:19:28.940837 | orchestrator | 2026-02-20 02:19:28.940876 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-20 02:19:28.940889 | orchestrator | Friday 20 February 2026 02:18:30 +0000 (0:00:01.356) 0:02:37.825 ******* 2026-02-20 02:19:28.940901 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:19:28.940914 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:19:28.940926 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:19:28.940937 | orchestrator | 2026-02-20 02:19:28.940950 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-20 02:19:28.940962 | orchestrator | 2026-02-20 02:19:28.940975 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-20 02:19:28.940988 | orchestrator | Friday 20 February 2026 02:18:40 +0000 (0:00:10.028) 0:02:47.853 ******* 2026-02-20 02:19:28.940999 | orchestrator | ok: [testbed-manager] 2026-02-20 02:19:28.941010 | orchestrator | 2026-02-20 02:19:28.941021 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-20 02:19:28.941032 | orchestrator | Friday 20 February 2026 02:18:41 +0000 (0:00:00.828) 0:02:48.681 ******* 2026-02-20 02:19:28.941043 | orchestrator | changed: [testbed-manager] 2026-02-20 
02:19:28.941054 | orchestrator | 2026-02-20 02:19:28.941065 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-20 02:19:28.941075 | orchestrator | Friday 20 February 2026 02:18:42 +0000 (0:00:00.754) 0:02:49.436 ******* 2026-02-20 02:19:28.941086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-20 02:19:28.941097 | orchestrator | 2026-02-20 02:19:28.941108 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-20 02:19:28.941119 | orchestrator | Friday 20 February 2026 02:18:42 +0000 (0:00:00.599) 0:02:50.036 ******* 2026-02-20 02:19:28.941130 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941141 | orchestrator | 2026-02-20 02:19:28.941151 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-20 02:19:28.941162 | orchestrator | Friday 20 February 2026 02:18:43 +0000 (0:00:00.927) 0:02:50.963 ******* 2026-02-20 02:19:28.941173 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941184 | orchestrator | 2026-02-20 02:19:28.941195 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-20 02:19:28.941206 | orchestrator | Friday 20 February 2026 02:18:44 +0000 (0:00:00.621) 0:02:51.585 ******* 2026-02-20 02:19:28.941216 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-20 02:19:28.941227 | orchestrator | 2026-02-20 02:19:28.941256 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-20 02:19:28.941267 | orchestrator | Friday 20 February 2026 02:18:46 +0000 (0:00:01.756) 0:02:53.341 ******* 2026-02-20 02:19:28.941278 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-20 02:19:28.941289 | orchestrator | 2026-02-20 02:19:28.941300 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-20 02:19:28.941311 | orchestrator | Friday 20 February 2026 02:18:47 +0000 (0:00:00.924) 0:02:54.266 ******* 2026-02-20 02:19:28.941322 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941333 | orchestrator | 2026-02-20 02:19:28.941344 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-20 02:19:28.941354 | orchestrator | Friday 20 February 2026 02:18:47 +0000 (0:00:00.477) 0:02:54.744 ******* 2026-02-20 02:19:28.941389 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941401 | orchestrator | 2026-02-20 02:19:28.941413 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-20 02:19:28.941424 | orchestrator | 2026-02-20 02:19:28.941435 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-20 02:19:28.941445 | orchestrator | Friday 20 February 2026 02:18:48 +0000 (0:00:00.501) 0:02:55.245 ******* 2026-02-20 02:19:28.941456 | orchestrator | ok: [testbed-manager] 2026-02-20 02:19:28.941467 | orchestrator | 2026-02-20 02:19:28.941478 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-20 02:19:28.941489 | orchestrator | Friday 20 February 2026 02:18:48 +0000 (0:00:00.360) 0:02:55.605 ******* 2026-02-20 02:19:28.941511 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 02:19:28.941523 | orchestrator | 2026-02-20 02:19:28.941553 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-20 02:19:28.941565 | orchestrator | Friday 20 February 2026 02:18:48 +0000 (0:00:00.241) 0:02:55.846 ******* 2026-02-20 02:19:28.941576 | orchestrator | ok: [testbed-manager] 2026-02-20 02:19:28.941586 | orchestrator | 2026-02-20 02:19:28.941597 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-20 02:19:28.941608 | orchestrator | Friday 20 February 2026 02:18:49 +0000 (0:00:00.858) 0:02:56.705 ******* 2026-02-20 02:19:28.941619 | orchestrator | ok: [testbed-manager] 2026-02-20 02:19:28.941630 | orchestrator | 2026-02-20 02:19:28.941641 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-20 02:19:28.941652 | orchestrator | Friday 20 February 2026 02:18:51 +0000 (0:00:01.823) 0:02:58.529 ******* 2026-02-20 02:19:28.941663 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941674 | orchestrator | 2026-02-20 02:19:28.941685 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-20 02:19:28.941696 | orchestrator | Friday 20 February 2026 02:18:52 +0000 (0:00:00.999) 0:02:59.529 ******* 2026-02-20 02:19:28.941707 | orchestrator | ok: [testbed-manager] 2026-02-20 02:19:28.941717 | orchestrator | 2026-02-20 02:19:28.941728 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-20 02:19:28.941739 | orchestrator | Friday 20 February 2026 02:18:52 +0000 (0:00:00.472) 0:03:00.001 ******* 2026-02-20 02:19:28.941750 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941761 | orchestrator | 2026-02-20 02:19:28.941772 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-20 02:19:28.941783 | orchestrator | Friday 20 February 2026 02:19:03 +0000 (0:00:10.526) 0:03:10.527 ******* 2026-02-20 02:19:28.941794 | orchestrator | changed: [testbed-manager] 2026-02-20 02:19:28.941804 | orchestrator | 2026-02-20 02:19:28.941815 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-20 02:19:28.941826 | orchestrator | Friday 20 February 2026 02:19:17 +0000 (0:00:13.589) 0:03:24.117 ******* 2026-02-20 02:19:28.941837 | orchestrator | ok: [testbed-manager] 2026-02-20 
02:19:28.941848 | orchestrator | 2026-02-20 02:19:28.941859 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-20 02:19:28.941870 | orchestrator | 2026-02-20 02:19:28.941881 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-20 02:19:28.941892 | orchestrator | Friday 20 February 2026 02:19:17 +0000 (0:00:00.807) 0:03:24.925 ******* 2026-02-20 02:19:28.941903 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:19:28.941914 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:19:28.941925 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:19:28.941935 | orchestrator | 2026-02-20 02:19:28.941946 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-20 02:19:28.941957 | orchestrator | Friday 20 February 2026 02:19:18 +0000 (0:00:00.336) 0:03:25.261 ******* 2026-02-20 02:19:28.941968 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.941979 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:19:28.941990 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:19:28.942001 | orchestrator | 2026-02-20 02:19:28.942012 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-20 02:19:28.942085 | orchestrator | Friday 20 February 2026 02:19:18 +0000 (0:00:00.327) 0:03:25.589 ******* 2026-02-20 02:19:28.942097 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:19:28.942108 | orchestrator | 2026-02-20 02:19:28.942119 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-20 02:19:28.942130 | orchestrator | Friday 20 February 2026 02:19:19 +0000 (0:00:00.757) 0:03:26.347 ******* 2026-02-20 02:19:28.942141 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 02:19:28.942161 | 
orchestrator | 2026-02-20 02:19:28.942172 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-20 02:19:28.942182 | orchestrator | Friday 20 February 2026 02:19:20 +0000 (0:00:00.891) 0:03:27.238 ******* 2026-02-20 02:19:28.942193 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 02:19:28.942204 | orchestrator | 2026-02-20 02:19:28.942215 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-20 02:19:28.942226 | orchestrator | Friday 20 February 2026 02:19:21 +0000 (0:00:00.881) 0:03:28.119 ******* 2026-02-20 02:19:28.942237 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.942248 | orchestrator | 2026-02-20 02:19:28.942258 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-20 02:19:28.942269 | orchestrator | Friday 20 February 2026 02:19:21 +0000 (0:00:00.119) 0:03:28.239 ******* 2026-02-20 02:19:28.942280 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 02:19:28.942291 | orchestrator | 2026-02-20 02:19:28.942302 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-20 02:19:28.942313 | orchestrator | Friday 20 February 2026 02:19:22 +0000 (0:00:00.985) 0:03:29.225 ******* 2026-02-20 02:19:28.942323 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.942334 | orchestrator | 2026-02-20 02:19:28.942345 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-20 02:19:28.942356 | orchestrator | Friday 20 February 2026 02:19:22 +0000 (0:00:00.137) 0:03:29.362 ******* 2026-02-20 02:19:28.942387 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.942398 | orchestrator | 2026-02-20 02:19:28.942409 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-20 02:19:28.942419 | orchestrator | Friday 20 
February 2026 02:19:22 +0000 (0:00:00.125) 0:03:29.487 ******* 2026-02-20 02:19:28.942437 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.942448 | orchestrator | 2026-02-20 02:19:28.942459 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-20 02:19:28.942470 | orchestrator | Friday 20 February 2026 02:19:22 +0000 (0:00:00.128) 0:03:29.616 ******* 2026-02-20 02:19:28.942481 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:19:28.942492 | orchestrator | 2026-02-20 02:19:28.942503 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-20 02:19:28.942514 | orchestrator | Friday 20 February 2026 02:19:22 +0000 (0:00:00.121) 0:03:29.738 ******* 2026-02-20 02:19:28.942533 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 02:20:37.283422 | orchestrator | 2026-02-20 02:20:37.283520 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-20 02:20:37.283534 | orchestrator | Friday 20 February 2026 02:19:28 +0000 (0:00:06.264) 0:03:36.002 ******* 2026-02-20 02:20:37.283542 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-20 02:20:37.283551 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
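The "Wait for Cilium resources" task above retries until each workload (deployment/cilium-operator, daemonset/cilium, hubble-relay, hubble-ui) reports ready, with 30 retries per the log. In the role this is an Ansible `until`/`retries` loop; a minimal shell sketch of the same idea follows. The `KUBECTL` variable, the `kube-system` namespace, and the timeout are illustrative assumptions, not taken from the actual role.

```shell
# Sketch only: retry "rollout status" for each resource until it succeeds.
# KUBECTL, the namespace, and the timeout are assumptions for illustration.
KUBECTL="${KUBECTL:-kubectl}"
RESOURCES="deployment/cilium-operator daemonset/cilium deployment/hubble-relay deployment/hubble-ui"

wait_for_resource() {
    # Retry "rollout status" up to $2 times, sleeping between attempts.
    resource="$1"; retries="$2"
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$KUBECTL" -n kube-system rollout status "$resource" --timeout=10s; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

A caller would loop over `$RESOURCES` and fail the play if any `wait_for_resource` call exhausts its retries, which matches the FAILED/RETRYING lines visible in the log above.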
2026-02-20 02:20:37.283561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-20 02:20:37.283569 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-20 02:20:37.283577 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-20 02:20:37.283585 | orchestrator | 2026-02-20 02:20:37.283594 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-20 02:20:37.283602 | orchestrator | Friday 20 February 2026 02:20:11 +0000 (0:00:42.745) 0:04:18.747 ******* 2026-02-20 02:20:37.283610 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 02:20:37.283619 | orchestrator | 2026-02-20 02:20:37.283626 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-20 02:20:37.283635 | orchestrator | Friday 20 February 2026 02:20:12 +0000 (0:00:01.320) 0:04:20.068 ******* 2026-02-20 02:20:37.283642 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 02:20:37.283650 | orchestrator | 2026-02-20 02:20:37.283681 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-20 02:20:37.283690 | orchestrator | Friday 20 February 2026 02:20:14 +0000 (0:00:01.670) 0:04:21.739 ******* 2026-02-20 02:20:37.283698 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 02:20:37.283706 | orchestrator | 2026-02-20 02:20:37.283714 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-20 02:20:37.283722 | orchestrator | Friday 20 February 2026 02:20:16 +0000 (0:00:01.459) 0:04:23.198 ******* 2026-02-20 02:20:37.283730 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:20:37.283737 | orchestrator | 2026-02-20 02:20:37.283746 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-20 02:20:37.283754 | orchestrator 
| Friday 20 February 2026 02:20:16 +0000 (0:00:00.138) 0:04:23.337 ******* 2026-02-20 02:20:37.283761 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-20 02:20:37.283770 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-20 02:20:37.283778 | orchestrator | 2026-02-20 02:20:37.283785 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-20 02:20:37.283793 | orchestrator | Friday 20 February 2026 02:20:18 +0000 (0:00:01.984) 0:04:25.321 ******* 2026-02-20 02:20:37.283801 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:20:37.283809 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:20:37.283817 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:20:37.283825 | orchestrator | 2026-02-20 02:20:37.283833 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-20 02:20:37.283841 | orchestrator | Friday 20 February 2026 02:20:18 +0000 (0:00:00.370) 0:04:25.692 ******* 2026-02-20 02:20:37.283848 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:20:37.283856 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:20:37.283864 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:20:37.283872 | orchestrator | 2026-02-20 02:20:37.283880 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-20 02:20:37.283888 | orchestrator | 2026-02-20 02:20:37.283901 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-20 02:20:37.283915 | orchestrator | Friday 20 February 2026 02:20:19 +0000 (0:00:00.930) 0:04:26.623 ******* 2026-02-20 02:20:37.283928 | orchestrator | ok: [testbed-manager] 2026-02-20 02:20:37.283941 | orchestrator | 2026-02-20 02:20:37.283954 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-20 02:20:37.283968 | orchestrator | Friday 20 February 2026 02:20:19 +0000 (0:00:00.441) 0:04:27.065 ******* 2026-02-20 02:20:37.283982 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 02:20:37.283994 | orchestrator | 2026-02-20 02:20:37.284007 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-20 02:20:37.284021 | orchestrator | Friday 20 February 2026 02:20:20 +0000 (0:00:00.249) 0:04:27.314 ******* 2026-02-20 02:20:37.284036 | orchestrator | changed: [testbed-manager] 2026-02-20 02:20:37.284050 | orchestrator | 2026-02-20 02:20:37.284063 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-20 02:20:37.284076 | orchestrator | 2026-02-20 02:20:37.284087 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-20 02:20:37.284099 | orchestrator | Friday 20 February 2026 02:20:26 +0000 (0:00:05.850) 0:04:33.165 ******* 2026-02-20 02:20:37.284114 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:20:37.284127 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:20:37.284140 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:20:37.284154 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:20:37.284167 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:20:37.284181 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:20:37.284194 | orchestrator | 2026-02-20 02:20:37.284207 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-20 02:20:37.284221 | orchestrator | Friday 20 February 2026 02:20:26 +0000 (0:00:00.650) 0:04:33.816 ******* 2026-02-20 02:20:37.284246 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-20 02:20:37.284286 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-20 02:20:37.284300 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-20 02:20:37.284312 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 02:20:37.284346 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 02:20:37.284355 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 02:20:37.284363 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-20 02:20:37.284371 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-20 02:20:37.284380 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 02:20:37.284388 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-20 02:20:37.284395 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 02:20:37.284404 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 02:20:37.284412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 02:20:37.284419 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 02:20:37.284446 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-20 02:20:37.284455 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 02:20:37.284462 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-20 02:20:37.284470 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-20 02:20:37.284478 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 02:20:37.284486 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 02:20:37.284493 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 02:20:37.284501 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 02:20:37.284509 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 02:20:37.284517 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 02:20:37.284525 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 02:20:37.284532 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 02:20:37.284540 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 02:20:37.284548 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 02:20:37.284556 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 02:20:37.284564 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 02:20:37.284572 | orchestrator | 2026-02-20 02:20:37.284580 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-20 02:20:37.284588 | orchestrator | Friday 20 February 2026 02:20:36 +0000 (0:00:09.266) 0:04:43.083 ******* 2026-02-20 02:20:37.284596 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:20:37.284603 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:20:37.284611 | orchestrator | 
skipping: [testbed-node-5] 2026-02-20 02:20:37.284626 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:20:37.284634 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:20:37.284642 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:20:37.284650 | orchestrator | 2026-02-20 02:20:37.284663 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-20 02:20:37.284671 | orchestrator | Friday 20 February 2026 02:20:36 +0000 (0:00:00.534) 0:04:43.618 ******* 2026-02-20 02:20:37.284678 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:20:37.284686 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:20:37.284694 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:20:37.284701 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:20:37.284709 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:20:37.284717 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:20:37.284724 | orchestrator | 2026-02-20 02:20:37.284732 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:20:37.284740 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:20:37.284751 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-20 02:20:37.284759 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-20 02:20:37.284767 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-20 02:20:37.284775 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 02:20:37.284788 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 02:20:37.719962 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 02:20:37.720064 | orchestrator | 2026-02-20 02:20:37.720080 | orchestrator | 2026-02-20 02:20:37.720092 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:20:37.720105 | orchestrator | Friday 20 February 2026 02:20:37 +0000 (0:00:00.723) 0:04:44.341 ******* 2026-02-20 02:20:37.720117 | orchestrator | =============================================================================== 2026-02-20 02:20:37.720128 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 64.38s 2026-02-20 02:20:37.720140 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.75s 2026-02-20 02:20:37.720150 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.23s 2026-02-20 02:20:37.720161 | orchestrator | kubectl : Install required packages ------------------------------------ 13.59s 2026-02-20 02:20:37.720172 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.53s 2026-02-20 02:20:37.720182 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.03s 2026-02-20 02:20:37.720193 | orchestrator | Manage labels ----------------------------------------------------------- 9.27s 2026-02-20 02:20:37.720204 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.26s 2026-02-20 02:20:37.720214 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.03s 2026-02-20 02:20:37.720243 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.85s 2026-02-20 02:20:37.720285 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.33s 2026-02-20 02:20:37.720308 | orchestrator 
| k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.61s 2026-02-20 02:20:37.720345 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.45s 2026-02-20 02:20:37.720357 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.98s 2026-02-20 02:20:37.720367 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.85s 2026-02-20 02:20:37.720378 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.82s 2026-02-20 02:20:37.720389 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.76s 2026-02-20 02:20:37.720399 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.67s 2026-02-20 02:20:37.720410 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.62s 2026-02-20 02:20:37.720420 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.57s 2026-02-20 02:20:38.109579 | orchestrator | + osism apply copy-kubeconfig 2026-02-20 02:20:50.307294 | orchestrator | 2026-02-20 02:20:50 | INFO  | Task c4ff647f-a2cb-4f6e-8e62-0552bc4ab251 (copy-kubeconfig) was prepared for execution. 2026-02-20 02:20:50.307367 | orchestrator | 2026-02-20 02:20:50 | INFO  | It takes a moment until task c4ff647f-a2cb-4f6e-8e62-0552bc4ab251 (copy-kubeconfig) has been started and output is visible here. 
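The `copy-kubeconfig` play that follows fetches the kubeconfig from the first master and then runs a "Change server address in the kubeconfig file" task. k3s writes `https://127.0.0.1:6443` as the server address by default, so the fetched file must be pointed at an address reachable from the manager. A minimal sed-based sketch of that rewrite, using a made-up sample file and target address (192.168.16.254 is an assumption, not taken from this deployment):

```shell
# Illustrative sketch: rewrite the server address in a fetched kubeconfig,
# as the "Change server address in the kubeconfig" tasks do. The file
# contents and the target address are made-up examples.
cat > /tmp/kubeconfig.example <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# k3s defaults to the node-local address; point it at a reachable one.
sed -i 's|server: https://127.0.0.1:6443|server: https://192.168.16.254:6443|' /tmp/kubeconfig.example
grep 'server:' /tmp/kubeconfig.example
```

The real play performs the same substitution via an Ansible task rather than sed, but the effect on the file is equivalent.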
2026-02-20 02:20:57.741038 | orchestrator | 2026-02-20 02:20:57.741171 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-20 02:20:57.741191 | orchestrator | 2026-02-20 02:20:57.741203 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-20 02:20:57.741214 | orchestrator | Friday 20 February 2026 02:20:54 +0000 (0:00:00.161) 0:00:00.161 ******* 2026-02-20 02:20:57.741259 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-20 02:20:57.741270 | orchestrator | 2026-02-20 02:20:57.741282 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-20 02:20:57.741317 | orchestrator | Friday 20 February 2026 02:20:55 +0000 (0:00:00.818) 0:00:00.979 ******* 2026-02-20 02:20:57.741330 | orchestrator | changed: [testbed-manager] 2026-02-20 02:20:57.741341 | orchestrator | 2026-02-20 02:20:57.741352 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-20 02:20:57.741367 | orchestrator | Friday 20 February 2026 02:20:56 +0000 (0:00:01.262) 0:00:02.242 ******* 2026-02-20 02:20:57.741378 | orchestrator | changed: [testbed-manager] 2026-02-20 02:20:57.741389 | orchestrator | 2026-02-20 02:20:57.741400 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:20:57.741411 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:20:57.741422 | orchestrator | 2026-02-20 02:20:57.741433 | orchestrator | 2026-02-20 02:20:57.741444 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:20:57.741454 | orchestrator | Friday 20 February 2026 02:20:57 +0000 (0:00:00.503) 0:00:02.746 ******* 2026-02-20 02:20:57.741465 | orchestrator | 
=============================================================================== 2026-02-20 02:20:57.741476 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s 2026-02-20 02:20:57.741486 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s 2026-02-20 02:20:57.741497 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2026-02-20 02:20:58.090137 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-20 02:21:10.462533 | orchestrator | 2026-02-20 02:21:10 | INFO  | Task 905ba0fa-022d-4067-a543-91801b183697 (openstackclient) was prepared for execution. 2026-02-20 02:21:10.462624 | orchestrator | 2026-02-20 02:21:10 | INFO  | It takes a moment until task 905ba0fa-022d-4067-a543-91801b183697 (openstackclient) has been started and output is visible here. 2026-02-20 02:22:00.473225 | orchestrator | 2026-02-20 02:22:00.473331 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-20 02:22:00.473369 | orchestrator | 2026-02-20 02:22:00.473379 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-20 02:22:00.473387 | orchestrator | Friday 20 February 2026 02:21:15 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-02-20 02:22:00.473399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-20 02:22:00.473418 | orchestrator | 2026-02-20 02:22:00.473435 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-20 02:22:00.473446 | orchestrator | Friday 20 February 2026 02:21:15 +0000 (0:00:00.234) 0:00:00.505 ******* 2026-02-20 02:22:00.473457 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-20 
02:22:00.473471 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-20 02:22:00.473483 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-20 02:22:00.473495 | orchestrator | 2026-02-20 02:22:00.473508 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-20 02:22:00.473521 | orchestrator | Friday 20 February 2026 02:21:16 +0000 (0:00:01.362) 0:00:01.868 ******* 2026-02-20 02:22:00.473535 | orchestrator | changed: [testbed-manager] 2026-02-20 02:22:00.473544 | orchestrator | 2026-02-20 02:22:00.473551 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-20 02:22:00.473559 | orchestrator | Friday 20 February 2026 02:21:18 +0000 (0:00:01.491) 0:00:03.359 ******* 2026-02-20 02:22:00.473568 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-20 02:22:00.473576 | orchestrator | ok: [testbed-manager] 2026-02-20 02:22:00.473585 | orchestrator | 2026-02-20 02:22:00.473593 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-20 02:22:00.473601 | orchestrator | Friday 20 February 2026 02:21:54 +0000 (0:00:36.152) 0:00:39.512 ******* 2026-02-20 02:22:00.473609 | orchestrator | changed: [testbed-manager] 2026-02-20 02:22:00.473616 | orchestrator | 2026-02-20 02:22:00.473624 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-20 02:22:00.473632 | orchestrator | Friday 20 February 2026 02:21:55 +0000 (0:00:01.077) 0:00:40.589 ******* 2026-02-20 02:22:00.473639 | orchestrator | ok: [testbed-manager] 2026-02-20 02:22:00.473647 | orchestrator | 2026-02-20 02:22:00.473655 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-20 02:22:00.473663 | orchestrator | Friday 20 February 2026 02:21:56 
+0000 (0:00:00.856) 0:00:41.446 ******* 2026-02-20 02:22:00.473671 | orchestrator | changed: [testbed-manager] 2026-02-20 02:22:00.473679 | orchestrator | 2026-02-20 02:22:00.473687 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-20 02:22:00.473694 | orchestrator | Friday 20 February 2026 02:21:58 +0000 (0:00:01.557) 0:00:43.004 ******* 2026-02-20 02:22:00.473702 | orchestrator | changed: [testbed-manager] 2026-02-20 02:22:00.473710 | orchestrator | 2026-02-20 02:22:00.473718 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-20 02:22:00.473726 | orchestrator | Friday 20 February 2026 02:21:58 +0000 (0:00:00.827) 0:00:43.832 ******* 2026-02-20 02:22:00.473733 | orchestrator | changed: [testbed-manager] 2026-02-20 02:22:00.473741 | orchestrator | 2026-02-20 02:22:00.473749 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-20 02:22:00.473756 | orchestrator | Friday 20 February 2026 02:21:59 +0000 (0:00:00.633) 0:00:44.465 ******* 2026-02-20 02:22:00.473764 | orchestrator | ok: [testbed-manager] 2026-02-20 02:22:00.473772 | orchestrator | 2026-02-20 02:22:00.473780 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:22:00.473788 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:22:00.473797 | orchestrator | 2026-02-20 02:22:00.473805 | orchestrator | 2026-02-20 02:22:00.473821 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:22:00.473829 | orchestrator | Friday 20 February 2026 02:21:59 +0000 (0:00:00.453) 0:00:44.919 ******* 2026-02-20 02:22:00.473837 | orchestrator | =============================================================================== 2026-02-20 02:22:00.473845 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 36.15s 2026-02-20 02:22:00.473853 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.56s 2026-02-20 02:22:00.473860 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.49s 2026-02-20 02:22:00.473868 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.36s 2026-02-20 02:22:00.473876 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.08s 2026-02-20 02:22:00.473884 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s 2026-02-20 02:22:00.473891 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.83s 2026-02-20 02:22:00.473899 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.63s 2026-02-20 02:22:00.473907 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.45s 2026-02-20 02:22:00.473915 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s 2026-02-20 02:22:03.186945 | orchestrator | 2026-02-20 02:22:03 | INFO  | Task 120b367f-6443-4cc8-b328-6c1b332e6028 (common) was prepared for execution. 2026-02-20 02:22:03.187069 | orchestrator | 2026-02-20 02:22:03 | INFO  | It takes a moment until task 120b367f-6443-4cc8-b328-6c1b332e6028 (common) has been started and output is visible here. 
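The "Wait for an healthy service" handler in the openstackclient play above waits until the container's health check passes. A common way to implement such a wait is to poll the container's health status; a minimal sketch follows. The `DOCKER` variable, the `docker inspect` polling approach, and the retry count are assumptions for illustration, not the role's actual implementation.

```shell
# Sketch of a health-wait loop like the "Wait for an healthy service"
# handler. DOCKER and the polling approach are illustrative assumptions.
DOCKER="${DOCKER:-docker}"

wait_for_healthy() {
    # Poll the container's reported health status until it is "healthy".
    container="$1"; retries="$2"
    i=0
    while [ "$i" -lt "$retries" ]; do
        status=$("$DOCKER" inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
        [ "$status" = "healthy" ] && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

Polling `.State.Health.Status` only works for containers that define a HEALTHCHECK; for containers without one, a wait would have to fall back to `.State.Status` being "running".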
2026-02-20 02:22:16.815926 | orchestrator | 2026-02-20 02:22:16.816018 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-20 02:22:16.816031 | orchestrator | 2026-02-20 02:22:16.816041 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 02:22:16.816049 | orchestrator | Friday 20 February 2026 02:22:07 +0000 (0:00:00.313) 0:00:00.313 ******* 2026-02-20 02:22:16.816058 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:22:16.816067 | orchestrator | 2026-02-20 02:22:16.816075 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-20 02:22:16.816083 | orchestrator | Friday 20 February 2026 02:22:09 +0000 (0:00:01.388) 0:00:01.702 ******* 2026-02-20 02:22:16.816091 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816099 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816142 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816150 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816158 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816165 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816173 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816181 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816188 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-02-20 02:22:16.816214 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816223 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 02:22:16.816231 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816239 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816269 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816277 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816286 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 02:22:16.816305 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816314 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816322 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816330 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816338 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 02:22:16.816346 | orchestrator | 2026-02-20 02:22:16.816359 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 02:22:16.816367 | orchestrator | Friday 20 February 2026 02:22:12 +0000 (0:00:02.887) 0:00:04.589 ******* 2026-02-20 02:22:16.816375 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:22:16.816384 | orchestrator | 2026-02-20 02:22:16.816392 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-20 02:22:16.816400 | orchestrator | Friday 20 February 2026 02:22:14 +0000 (0:00:01.859) 0:00:06.448 ******* 2026-02-20 02:22:16.816411 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:16.816508 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:16.816517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:16.816533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056872 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 
02:22:18.056924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.056995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.057006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:18.057018 | orchestrator | 2026-02-20 02:22:18.057031 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-20 02:22:18.057043 | orchestrator | Friday 20 February 2026 02:22:17 +0000 (0:00:03.702) 0:00:10.151 ******* 2026-02-20 02:22:18.057057 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.057070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.057084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.057097 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:22:18.057141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.057170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.645977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646147 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:22:18.646199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.646209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646232 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:22:18.646238 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.646245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646272 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:22:18.646293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.646299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646311 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:22:18.646317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.646328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:18.646340 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:22:18.646346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:18.646362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.485964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486157 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:22:19.486176 | orchestrator | 2026-02-20 02:22:19.486187 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-20 02:22:19.486197 | orchestrator | Friday 20 February 2026 02:22:18 +0000 (0:00:00.870) 0:00:11.021 ******* 2026-02-20 02:22:19.486207 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:19.486218 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:19.486262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486288 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:22:19.486297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:19.486337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486354 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:22:19.486362 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:22:19.486375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:19.486383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486397 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:19.486406 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:22:19.486414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:19.486438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550190 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:22:24.550206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:24.550218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550239 | 
orchestrator | skipping: [testbed-node-4] 2026-02-20 02:22:24.550278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 02:22:24.550288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:24.550306 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:22:24.550315 | orchestrator | 2026-02-20 02:22:24.550325 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-20 
02:22:24.550336 | orchestrator | Friday 20 February 2026 02:22:20 +0000 (0:00:01.981) 0:00:13.002 *******
2026-02-20 02:22:24.550344 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:22:24.550352 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:22:24.550361 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:22:24.550370 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:22:24.550394 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:22:24.550404 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:22:24.550412 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:22:24.550421 | orchestrator |
2026-02-20 02:22:24.550431 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-20 02:22:24.550439 | orchestrator | Friday 20 February 2026 02:22:21 +0000 (0:00:00.688) 0:00:13.691 *******
2026-02-20 02:22:24.550448 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:22:24.550457 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:22:24.550466 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:22:24.550475 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:22:24.550483 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:22:24.550491 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:22:24.550499 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:22:24.550508 | orchestrator |
2026-02-20 02:22:24.550516 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-20 02:22:24.550526 | orchestrator | Friday 20 February 2026 02:22:22 +0000 (0:00:00.814) 0:00:14.506 *******
2026-02-20 02:22:24.550536 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes':
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550609 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:24.550640 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.349841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:27.349953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350076 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:27.350278 | orchestrator | 2026-02-20 02:22:27.350292 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-20 02:22:27.350306 | orchestrator | Friday 20 February 2026 02:22:25 +0000 (0:00:03.391) 0:00:17.897 ******* 2026-02-20 02:22:27.350320 | orchestrator | [WARNING]: Skipped 2026-02-20 02:22:27.350333 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-20 02:22:27.350346 | orchestrator | to this access issue:
2026-02-20 02:22:27.350356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-20 02:22:27.350363 | orchestrator | directory
2026-02-20 02:22:27.350372 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 02:22:27.350381 | orchestrator |
2026-02-20 02:22:27.350390 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-20 02:22:27.350398 | orchestrator | Friday 20 February 2026 02:22:26 +0000 (0:00:00.991) 0:00:18.889 *******
2026-02-20 02:22:27.350407 | orchestrator | [WARNING]: Skipped
2026-02-20 02:22:27.350415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-20 02:22:27.350430 | orchestrator | to this access issue:
2026-02-20 02:22:27.350445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-20 02:22:37.759338 | orchestrator | directory
2026-02-20 02:22:37.759453 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 02:22:37.759469 | orchestrator |
2026-02-20 02:22:37.759482 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-20 02:22:37.759494 | orchestrator | Friday 20 February 2026 02:22:27 +0000 (0:00:01.052) 0:00:19.941 *******
2026-02-20 02:22:37.759505 | orchestrator | [WARNING]: Skipped
2026-02-20 02:22:37.759517 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-20 02:22:37.759529 | orchestrator | to this access issue:
2026-02-20 02:22:37.759540 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-20 02:22:37.759551 | orchestrator | directory
2026-02-20 02:22:37.759562 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20
02:22:37.759573 | orchestrator |
2026-02-20 02:22:37.759584 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-20 02:22:37.759595 | orchestrator | Friday 20 February 2026 02:22:28 +0000 (0:00:00.850) 0:00:20.792 *******
2026-02-20 02:22:37.759606 | orchestrator | [WARNING]: Skipped
2026-02-20 02:22:37.759617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-20 02:22:37.759627 | orchestrator | to this access issue:
2026-02-20 02:22:37.759638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-20 02:22:37.759649 | orchestrator | directory
2026-02-20 02:22:37.759660 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 02:22:37.759677 | orchestrator |
2026-02-20 02:22:37.759695 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-20 02:22:37.759714 | orchestrator | Friday 20 February 2026 02:22:29 +0000 (0:00:00.901) 0:00:21.694 *******
2026-02-20 02:22:37.759731 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:22:37.759748 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:22:37.759766 | orchestrator | changed: [testbed-manager]
2026-02-20 02:22:37.759784 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:22:37.759801 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:22:37.759817 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:22:37.759835 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:22:37.759854 | orchestrator |
2026-02-20 02:22:37.759900 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-20 02:22:37.759921 | orchestrator | Friday 20 February 2026 02:22:32 +0000 (0:00:02.709) 0:00:24.403 *******
2026-02-20 02:22:37.759940 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20
02:22:37.759962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.759981 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.760000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.760019 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.760039 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.760060 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-20 02:22:37.760156 | orchestrator |
2026-02-20 02:22:37.760182 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-20 02:22:37.760201 | orchestrator | Friday 20 February 2026 02:22:34 +0000 (0:00:02.552) 0:00:26.956 *******
2026-02-20 02:22:37.760223 | orchestrator | changed: [testbed-manager]
2026-02-20 02:22:37.760241 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:22:37.760290 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:22:37.760310 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:22:37.760327 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:22:37.760343 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:22:37.760360 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:22:37.760379 | orchestrator |
2026-02-20 02:22:37.760396 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-20 02:22:37.760413 | orchestrator | Friday 20 February 2026 02:22:36 +0000 (0:00:02.028) 0:00:28.985 *******
2026-02-20 02:22:37.760436 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd',
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:37.760487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:37.760510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:37.760529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:37.760560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:37.760580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:37.760617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:37.760638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:37.760658 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:37.760707 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:44.557595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:44.557699 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:44.557708 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:44.557737 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557741 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:44.557745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:22:44.557767 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557772 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557777 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557781 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:44.557790 | orchestrator | 2026-02-20 02:22:44.557795 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-20 02:22:44.557800 | orchestrator | Friday 20 February 2026 
02:22:38 +0000 (0:00:01.672) 0:00:30.658 ******* 2026-02-20 02:22:44.557804 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557809 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557813 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557821 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557824 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557828 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 02:22:44.557832 | orchestrator | 2026-02-20 02:22:44.557836 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-20 02:22:44.557839 | orchestrator | Friday 20 February 2026 02:22:40 +0000 (0:00:02.120) 0:00:32.778 ******* 2026-02-20 02:22:44.557843 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557852 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557855 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557864 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557868 | orchestrator | changed: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557872 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 02:22:44.557875 | orchestrator | 2026-02-20 02:22:44.557879 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-20 02:22:44.557883 | orchestrator | Friday 20 February 2026 02:22:42 +0000 (0:00:01.911) 0:00:34.689 ******* 2026-02-20 02:22:44.557887 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:44.557896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 02:22:45.211409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211420 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:22:45.211560 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:23:56.068604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:23:56.068719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:23:56.068733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:23:56.068743 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:23:56.068752 | orchestrator | 2026-02-20 02:23:56.068763 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-20 02:23:56.068774 | orchestrator | Friday 20 February 2026 02:22:45 +0000 (0:00:02.896) 0:00:37.586 ******* 2026-02-20 02:23:56.068783 | orchestrator | changed: [testbed-manager] 2026-02-20 02:23:56.068794 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:23:56.068803 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:23:56.068812 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:23:56.068821 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:23:56.068830 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:23:56.068839 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:23:56.068848 | orchestrator | 2026-02-20 02:23:56.068857 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-20 02:23:56.068866 | orchestrator | Friday 20 February 2026 02:22:46 +0000 (0:00:01.488) 0:00:39.074 ******* 2026-02-20 02:23:56.068875 | orchestrator | changed: [testbed-manager] 2026-02-20 02:23:56.068884 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:23:56.068892 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:23:56.068901 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:23:56.068910 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:23:56.068918 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:23:56.068927 | orchestrator | changed: [testbed-node-5] 2026-02-20 
02:23:56.068935 | orchestrator | 2026-02-20 02:23:56.068944 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.068960 | orchestrator | Friday 20 February 2026 02:22:47 +0000 (0:00:01.159) 0:00:40.234 ******* 2026-02-20 02:23:56.068976 | orchestrator | 2026-02-20 02:23:56.069067 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069083 | orchestrator | Friday 20 February 2026 02:22:47 +0000 (0:00:00.108) 0:00:40.343 ******* 2026-02-20 02:23:56.069097 | orchestrator | 2026-02-20 02:23:56.069111 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069152 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.075) 0:00:40.419 ******* 2026-02-20 02:23:56.069167 | orchestrator | 2026-02-20 02:23:56.069183 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069198 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.072) 0:00:40.491 ******* 2026-02-20 02:23:56.069215 | orchestrator | 2026-02-20 02:23:56.069230 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069244 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.262) 0:00:40.754 ******* 2026-02-20 02:23:56.069262 | orchestrator | 2026-02-20 02:23:56.069283 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069297 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.066) 0:00:40.820 ******* 2026-02-20 02:23:56.069311 | orchestrator | 2026-02-20 02:23:56.069325 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 02:23:56.069338 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.062) 0:00:40.883 ******* 
2026-02-20 02:23:56.069353 | orchestrator | 2026-02-20 02:23:56.069366 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-20 02:23:56.069380 | orchestrator | Friday 20 February 2026 02:22:48 +0000 (0:00:00.096) 0:00:40.980 ******* 2026-02-20 02:23:56.069396 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:23:56.069410 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:23:56.069424 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:23:56.069438 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:23:56.069452 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:23:56.069491 | orchestrator | changed: [testbed-manager] 2026-02-20 02:23:56.069507 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:23:56.069522 | orchestrator | 2026-02-20 02:23:56.069537 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-20 02:23:56.069551 | orchestrator | Friday 20 February 2026 02:23:18 +0000 (0:00:29.459) 0:01:10.439 ******* 2026-02-20 02:23:56.069566 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:23:56.069580 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:23:56.069595 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:23:56.069609 | orchestrator | changed: [testbed-manager] 2026-02-20 02:23:56.069624 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:23:56.069639 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:23:56.069655 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:23:56.069670 | orchestrator | 2026-02-20 02:23:56.069684 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-20 02:23:56.069700 | orchestrator | Friday 20 February 2026 02:23:45 +0000 (0:00:27.632) 0:01:38.072 ******* 2026-02-20 02:23:56.069715 | orchestrator | ok: [testbed-manager] 2026-02-20 02:23:56.069730 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:23:56.069746 | 
orchestrator | ok: [testbed-node-2] 2026-02-20 02:23:56.069760 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:23:56.069775 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:23:56.069790 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:23:56.069805 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:23:56.069820 | orchestrator | 2026-02-20 02:23:56.069835 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-20 02:23:56.069849 | orchestrator | Friday 20 February 2026 02:23:47 +0000 (0:00:02.045) 0:01:40.118 ******* 2026-02-20 02:23:56.069862 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:23:56.069871 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:23:56.069880 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:23:56.069888 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:23:56.069897 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:23:56.069906 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:23:56.069914 | orchestrator | changed: [testbed-manager] 2026-02-20 02:23:56.069923 | orchestrator | 2026-02-20 02:23:56.069931 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:23:56.069953 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.069964 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.069972 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.070079 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.070094 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.070104 | orchestrator | testbed-node-4 : ok=18  changed=14  
unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.070113 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 02:23:56.070121 | orchestrator | 2026-02-20 02:23:56.070130 | orchestrator | 2026-02-20 02:23:56.070139 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:23:56.070148 | orchestrator | Friday 20 February 2026 02:23:56 +0000 (0:00:08.304) 0:01:48.423 ******* 2026-02-20 02:23:56.070157 | orchestrator | =============================================================================== 2026-02-20 02:23:56.070166 | orchestrator | common : Restart fluentd container ------------------------------------- 29.46s 2026-02-20 02:23:56.070175 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 27.63s 2026-02-20 02:23:56.070183 | orchestrator | common : Restart cron container ----------------------------------------- 8.31s 2026-02-20 02:23:56.070192 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.70s 2026-02-20 02:23:56.070201 | orchestrator | common : Copying over config.json files for services -------------------- 3.39s 2026-02-20 02:23:56.070210 | orchestrator | common : Check common containers ---------------------------------------- 2.90s 2026-02-20 02:23:56.070218 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.89s 2026-02-20 02:23:56.070227 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.71s 2026-02-20 02:23:56.070236 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.55s 2026-02-20 02:23:56.070244 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.12s 2026-02-20 02:23:56.070253 | orchestrator | common : Initializing toolbox container using normal user --------------- 
2.05s 2026-02-20 02:23:56.070262 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.03s 2026-02-20 02:23:56.070270 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.98s 2026-02-20 02:23:56.070279 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.91s 2026-02-20 02:23:56.070288 | orchestrator | common : include_tasks -------------------------------------------------- 1.86s 2026-02-20 02:23:56.070296 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.67s 2026-02-20 02:23:56.070315 | orchestrator | common : Creating log volume -------------------------------------------- 1.49s 2026-02-20 02:23:56.526848 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s 2026-02-20 02:23:56.526926 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.16s 2026-02-20 02:23:56.526935 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.05s 2026-02-20 02:23:59.079535 | orchestrator | 2026-02-20 02:23:59 | INFO  | Task 3f777c9e-4a1e-48ca-be22-891bfc495c1f (loadbalancer) was prepared for execution. 2026-02-20 02:23:59.079670 | orchestrator | 2026-02-20 02:23:59 | INFO  | It takes a moment until task 3f777c9e-4a1e-48ca-be22-891bfc495c1f (loadbalancer) has been started and output is visible here. 
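Every item in the "Check common containers" loop above follows the same service-definition shape: a mapping from service key to `container_name`, `group`, `enabled`, `image`, `environment`, and `volumes`. A minimal sketch of filtering such a mapping down to the enabled services and their images, assuming only the structure visible in the log (an illustration of the data shape, not kolla-ansible's actual code):

```python
# Illustrative sketch of the service-definition mapping seen in the
# "Check common containers" items above. Values are copied from the log.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.8.20251130",
    },
    "kolla-toolbox": {
        "container_name": "kolla_toolbox",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130",
    },
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20251130",
    },
}


def enabled_images(services):
    """Return {container_name: image} for every enabled service."""
    return {
        svc["container_name"]: svc["image"]
        for svc in services.values()
        if svc.get("enabled")
    }


print(enabled_images(common_services))
```

In the play above the same filtering happens per host via Ansible's `with_dict`-style loop, which is why each host reports one `ok`/`changed`/`skipping` result per service key.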
2026-02-20 02:24:15.203561 | orchestrator |
2026-02-20 02:24:15.203639 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:24:15.203646 | orchestrator |
2026-02-20 02:24:15.203651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 02:24:15.203655 | orchestrator | Friday 20 February 2026 02:24:03 +0000 (0:00:00.291) 0:00:00.291 *******
2026-02-20 02:24:15.203659 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:24:15.203664 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:24:15.203669 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:24:15.203675 | orchestrator |
2026-02-20 02:24:15.203681 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 02:24:15.203687 | orchestrator | Friday 20 February 2026 02:24:04 +0000 (0:00:00.396) 0:00:00.687 *******
2026-02-20 02:24:15.203694 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-20 02:24:15.203701 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-20 02:24:15.203707 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-20 02:24:15.203713 | orchestrator |
2026-02-20 02:24:15.203719 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-20 02:24:15.203723 | orchestrator |
2026-02-20 02:24:15.203727 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-20 02:24:15.203731 | orchestrator | Friday 20 February 2026 02:24:04 +0000 (0:00:00.520) 0:00:01.207 *******
2026-02-20 02:24:15.203736 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:24:15.203740 | orchestrator |
2026-02-20 02:24:15.203744 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-20 02:24:15.203749 | orchestrator | Friday 20 February 2026 02:24:05 +0000 (0:00:00.637) 0:00:01.845 *******
2026-02-20 02:24:15.203755 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:24:15.203762 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:24:15.203768 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:24:15.203774 | orchestrator |
2026-02-20 02:24:15.203781 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-20 02:24:15.203787 | orchestrator | Friday 20 February 2026 02:24:05 +0000 (0:00:00.607) 0:00:02.452 *******
2026-02-20 02:24:15.203794 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:24:15.203798 | orchestrator |
2026-02-20 02:24:15.203802 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-20 02:24:15.203806 | orchestrator | Friday 20 February 2026 02:24:06 +0000 (0:00:00.658) 0:00:03.231 *******
2026-02-20 02:24:15.203810 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:24:15.203814 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:24:15.203829 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:24:15.203834 | orchestrator |
2026-02-20 02:24:15.203837 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-20 02:24:15.203847 | orchestrator | Friday 20 February 2026 02:24:07 +0000 (0:00:00.658) 0:00:03.889 *******
2026-02-20 02:24:15.203851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203864 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203868 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-20 02:24:15.203873 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-20 02:24:15.203899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-20 02:24:15.203913 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-20 02:24:15.203919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-20 02:24:15.203925 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-20 02:24:15.203929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-20 02:24:15.203932 | orchestrator |
2026-02-20 02:24:15.203936 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-20 02:24:15.203940 | orchestrator | Friday 20 February 2026 02:24:10 +0000 (0:00:03.248) 0:00:07.138 *******
2026-02-20 02:24:15.203944 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-20 02:24:15.203949 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-20 02:24:15.203953 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-20 02:24:15.203958 | orchestrator |
2026-02-20 02:24:15.204019 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-20 02:24:15.204028 | orchestrator | Friday 20 February 2026 02:24:11 +0000 (0:00:00.795) 0:00:07.934 *******
2026-02-20 02:24:15.204032 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-20 02:24:15.204036 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-20 02:24:15.204040 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-20 02:24:15.204044 | orchestrator |
2026-02-20 02:24:15.204047 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-20 02:24:15.204051 | orchestrator | Friday 20 February 2026 02:24:12 +0000 (0:00:01.349) 0:00:09.284 *******
2026-02-20 02:24:15.204055 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-20 02:24:15.204059 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:15.204095 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-20 02:24:15.204099 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:15.204103 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-20 02:24:15.204106 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:15.204110 | orchestrator |
2026-02-20 02:24:15.204116 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-20 02:24:15.204123 | orchestrator | Friday 20 February 2026 02:24:13 +0000 (0:00:00.549) 0:00:09.833 *******
2026-02-20 02:24:15.204132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:15.204143 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 02:24:15.204151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 02:24:15.204163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 
02:24:15.204168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 02:24:15.204180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 02:24:20.787235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 02:24:20.787327 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:20.787341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:20.787380 | orchestrator |
2026-02-20 02:24:20.787396 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-20 02:24:20.787414 | orchestrator | Friday 20 February 2026 02:24:15 +0000 (0:00:02.002) 0:00:11.836 *******
2026-02-20 02:24:20.787432 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:24:20.787450 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:24:20.787464 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:24:20.787473 | orchestrator |
2026-02-20 02:24:20.787483 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-20 02:24:20.787493 | orchestrator | Friday 20 February 2026 02:24:16 +0000 (0:00:00.951) 0:00:12.788 *******
2026-02-20 02:24:20.787503 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-20 02:24:20.787513 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-20 02:24:20.787523 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-20 02:24:20.787532 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-20 02:24:20.787542 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-20 02:24:20.787551 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-20 02:24:20.787563 | orchestrator |
2026-02-20 02:24:20.787579 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-20 02:24:20.787600 | orchestrator | Friday 20 February 2026 02:24:17 +0000 (0:00:01.531) 0:00:14.319 *******
2026-02-20 02:24:20.787622 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:24:20.787638 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:24:20.787654 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:24:20.787670 | orchestrator |
2026-02-20 02:24:20.787685 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-20 02:24:20.787700 | orchestrator | Friday 20 February 2026 02:24:18 +0000 (0:00:01.010) 0:00:15.330 *******
2026-02-20 02:24:20.787714 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:24:20.787728 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:24:20.787744 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:24:20.787760 | orchestrator |
2026-02-20 02:24:20.787775 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-20 02:24:20.787791 | orchestrator | Friday 20 February 2026 02:24:20 +0000 (0:00:01.458) 0:00:16.788 *******
2026-02-20 02:24:20.787809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-20 02:24:20.787854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:20.787871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:20.787908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 02:24:20.787926 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:24:20.787946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-20 02:24:20.788054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:20.788083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:20.788106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 02:24:20.788124 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:20.788154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 02:24:23.680268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:23.680397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:23.680413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 02:24:23.680424 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:23.680435 | orchestrator | 2026-02-20 02:24:23.680445 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-20 02:24:23.680455 | orchestrator | Friday 20 February 2026 02:24:20 +0000 (0:00:00.627) 0:00:17.416 ******* 2026-02-20 02:24:23.680464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-20 02:24:23.680475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 02:24:23.680519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 02:24:23.680578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 02:24:23.680606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:23.680620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', 
'__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 02:24:23.680636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 02:24:23.680652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:23.680686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', 
'__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 02:24:23.680728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 02:24:32.438846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:32.438922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2', 
'__omit_place_holder__9a69dcb8e4243316c954d5a7627d38c93169d5b2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-20 02:24:32.438932 | orchestrator |
2026-02-20 02:24:32.438940 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-20 02:24:32.438985 | orchestrator | Friday 20 February 2026 02:24:23 +0000 (0:00:02.896) 0:00:20.313 *******
2026-02-20 02:24:32.438991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:32.438997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:32.439027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:32.439031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:32.439047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:32.439051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:32.439055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:32.439060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:32.439064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:32.439072 | orchestrator |
2026-02-20 02:24:32.439076 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-20 02:24:32.439080 | orchestrator | Friday 20 February 2026 02:24:26 +0000 (0:00:03.306) 0:00:23.620 *******
2026-02-20 02:24:32.439084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 02:24:32.439090 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 02:24:32.439094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 02:24:32.439097 | orchestrator |
2026-02-20 02:24:32.439101 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-20 02:24:32.439105 | orchestrator | Friday 20 February 2026 02:24:28 +0000 (0:00:01.895) 0:00:25.515 *******
2026-02-20 02:24:32.439109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 02:24:32.439114 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 02:24:32.439117 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 02:24:32.439121 | orchestrator |
2026-02-20 02:24:32.439125 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-20 02:24:32.439129 | orchestrator | Friday 20 February 2026 02:24:31 +0000 (0:00:02.991) 0:00:28.506 *******
2026-02-20 02:24:32.439133 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:32.439138 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:32.439142 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:32.439146 | orchestrator |
2026-02-20 02:24:32.439155 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-20 02:24:44.704260 | orchestrator | Friday 20 February 2026 02:24:32 +0000 (0:00:00.567) 0:00:29.073 *******
2026-02-20 02:24:44.704335 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 02:24:44.704353 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 02:24:44.704359 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 02:24:44.704364 | orchestrator |
2026-02-20 02:24:44.704370 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-20 02:24:44.704376 | orchestrator | Friday 20 February 2026 02:24:34 +0000 (0:00:02.257) 0:00:31.331 *******
2026-02-20 02:24:44.704382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 02:24:44.704388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 02:24:44.704393 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 02:24:44.704398 | orchestrator |
2026-02-20 02:24:44.704403 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-20 02:24:44.704408 | orchestrator | Friday 20 February 2026 02:24:36 +0000 (0:00:02.199) 0:00:33.531 *******
2026-02-20 02:24:44.704415 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-02-20 02:24:44.704421 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-02-20 02:24:44.704426 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-02-20 02:24:44.704431 | orchestrator |
2026-02-20 02:24:44.704436 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-20 02:24:44.704473 | orchestrator | Friday 20 February 2026 02:24:38 +0000 (0:00:01.588) 0:00:35.120 *******
2026-02-20 02:24:44.704480 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-20 02:24:44.704486 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-20 02:24:44.704491 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-20 02:24:44.704496 | orchestrator |
2026-02-20 02:24:44.704501 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-20 02:24:44.704506 | orchestrator | Friday 20 February 2026 02:24:39 +0000 (0:00:00.575) 0:00:36.587 *******
2026-02-20 02:24:44.704511 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:24:44.704516 | orchestrator |
2026-02-20 02:24:44.704521 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-02-20 02:24:44.704526 | orchestrator | Friday 20 February 2026 02:24:40 +0000 (0:00:00.575) 0:00:37.162 *******
2026-02-20 02:24:44.704533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:44.704545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:44.704550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:44.704571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:44.704581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:44.704598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:44.704610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:44.704620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:44.704634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:44.704645 | orchestrator |
2026-02-20 02:24:44.704650 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-02-20 02:24:44.704656 | orchestrator | Friday 20 February 2026 02:24:44 +0000 (0:00:03.584) 0:00:40.747 *******
2026-02-20 02:24:44.704666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:45.504527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:45.504619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:45.504627 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:45.504636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:45.504641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:45.504669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:45.504674 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:45.504680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:45.504697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:45.504707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:45.504713 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:45.504718 | orchestrator |
2026-02-20 02:24:45.504725 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-02-20 02:24:45.504731 | orchestrator | Friday 20 February 2026 02:24:44 +0000 (0:00:00.593) 0:00:41.340 *******
2026-02-20 02:24:45.504737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:45.504743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:45.504752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:45.504757 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:45.504762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:45.504771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:46.368739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:46.368828 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:46.368840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:46.368851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:46.368859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:46.368866 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:46.368873 | orchestrator |
2026-02-20 02:24:46.368881 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-20 02:24:46.368889 | orchestrator | Friday 20 February 2026 02:24:45 +0000 (0:00:00.800) 0:00:42.140 *******
2026-02-20 02:24:46.368911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:46.368919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:46.369017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:46.369027 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:46.369037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:46.369048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:46.369059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:46.369070 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:46.369082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:46.369102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:46.369122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:46.369141 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:47.814234 | orchestrator |
2026-02-20 02:24:47.814329 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-20 02:24:47.814355 | orchestrator | Friday 20 February 2026 02:24:46 +0000 (0:00:00.859) 0:00:42.999 *******
2026-02-20 02:24:47.814391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:47.814411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:24:47.814427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:24:47.814443 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:47.814460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:47.814496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:47.814541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:47.814558 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:47.814618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 02:24:47.814638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:47.814656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:47.814671 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:47.814687 | orchestrator | 2026-02-20 02:24:47.814703 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-20 02:24:47.814719 | orchestrator | Friday 20 February 2026 02:24:46 +0000 (0:00:00.601) 0:00:43.601 ******* 2026-02-20 02:24:47.814736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-20 02:24:47.814761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:47.814800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:47.814817 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:24:47.814848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-20 02:24:48.935706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:48.935775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:48.935792 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:48.935808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 02:24:48.935834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:48.935875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:48.935888 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:48.935900 | orchestrator | 2026-02-20 02:24:48.935915 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-20 02:24:48.935982 | orchestrator | Friday 20 February 2026 02:24:47 +0000 (0:00:00.846) 0:00:44.448 ******* 2026-02-20 02:24:48.935998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-20 02:24:48.936035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:48.936049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:48.936094 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:24:48.936108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-20 02:24:48.936120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:48.936163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:48.936177 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:48.936191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-20 02:24:48.936212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:50.434109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:50.434184 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:50.434191 | orchestrator | 2026-02-20 02:24:50.434204 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-20 02:24:50.434210 | orchestrator | Friday 20 February 2026 02:24:48 +0000 (0:00:01.114) 0:00:45.562 ******* 2026-02-20 02:24:50.434216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-20 02:24:50.434222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:50.434263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:50.434272 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:24:50.434278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-20 02:24:50.434284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:50.434305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:50.434311 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:50.434317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 02:24:50.434323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:50.434334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:50.434339 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:50.434345 | orchestrator | 2026-02-20 02:24:50.434351 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-20 02:24:50.434356 | orchestrator | Friday 20 February 2026 02:24:49 +0000 (0:00:00.608) 0:00:46.171 ******* 2026-02-20 02:24:50.434362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-20 02:24:50.434367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:50.434383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:57.425173 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:24:57.425293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-20 02:24:57.425313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:57.425353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:57.425366 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:24:57.425391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 02:24:57.425403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 02:24:57.425413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 02:24:57.425423 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:24:57.425433 | orchestrator | 2026-02-20 02:24:57.425451 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-20 02:24:57.425469 | orchestrator | Friday 20 February 2026 02:24:50 +0000 (0:00:00.894) 0:00:47.065 ******* 2026-02-20 02:24:57.425485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-20 02:24:57.425524 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-20 02:24:57.425543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-20 02:24:57.425560 | orchestrator | 2026-02-20 02:24:57.425577 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-20 02:24:57.425594 | orchestrator | Friday 20 February 2026 02:24:52 +0000 (0:00:01.717) 0:00:48.783 ******* 2026-02-20 02:24:57.425623 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-20 02:24:57.425640 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-20 02:24:57.425657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-20 02:24:57.425674 | orchestrator | 2026-02-20 02:24:57.425692 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-20 02:24:57.425710 | orchestrator | Friday 20 February 2026 02:24:53 +0000 (0:00:01.711) 0:00:50.495 ******* 2026-02-20 02:24:57.425728 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-20 02:24:57.425745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-20 02:24:57.425762 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-20 02:24:57.425778 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 02:24:57.425795 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:24:57.425813 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 02:24:57.425830 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:24:57.425848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 02:24:57.425866 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:24:57.425883 | orchestrator |
2026-02-20 02:24:57.425899 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-02-20 02:24:57.425916 | orchestrator | Friday 20 February 2026 02:24:54 +0000 (0:00:00.932) 0:00:51.428 *******
2026-02-20 02:24:57.426098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 02:24:57.426130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 02:24:57.426149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 02:24:57.426184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:25:02.206843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:25:02.206954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 02:25:02.206965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:25:02.206982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:25:02.206986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 02:25:02.206991 | orchestrator |
2026-02-20 02:25:02.206996 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-20 02:25:02.207001 | orchestrator | Friday 20 February 2026 02:24:57 +0000 (0:00:02.632) 0:00:54.060 *******
2026-02-20 02:25:02.207006 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:25:02.207010 | orchestrator |
2026-02-20 02:25:02.207014 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-20 02:25:02.207033 | orchestrator | Friday 20 February 2026 02:24:58 +0000 (0:00:00.938) 0:00:54.999 *******
2026-02-20 02:25:02.207048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:02.207054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:02.207059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.207063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.207070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:02.207074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:02.207082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.207091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:02.872406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:02.872433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872465 | orchestrator |
2026-02-20 02:25:02.872473 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-20 02:25:02.872482 | orchestrator | Friday 20 February 2026 02:25:02 +0000 (0:00:03.836) 0:00:58.835 *******
2026-02-20 02:25:02.872490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:02.872510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:02.872517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872530 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:25:02.872541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:02.872553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:02.872559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:02.872573 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:25:02.872585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 02:25:12.080084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 02:25:12.080203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.080238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.080248 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:25:12.080258 | orchestrator |
2026-02-20 02:25:12.080267 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-20 02:25:12.080277 | orchestrator | Friday 20 February 2026 02:25:02 +0000 (0:00:00.671) 0:00:59.506 *******
2026-02-20 02:25:12.080286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080305 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:25:12.080328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080344 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:25:12.080352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-20 02:25:12.080368 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:25:12.080376 | orchestrator |
2026-02-20 02:25:12.080384 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-20 02:25:12.080392 | orchestrator | Friday 20 February 2026 02:25:04 +0000 (0:00:01.288) 0:01:00.795 *******
2026-02-20 02:25:12.080400 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:25:12.080409 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:25:12.080417 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:25:12.080425 | orchestrator |
2026-02-20 02:25:12.080433 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-20 02:25:12.080440 | orchestrator | Friday 20 February 2026 02:25:05 +0000 (0:00:01.369) 0:01:02.165 *******
2026-02-20 02:25:12.080448 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:25:12.080456 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:25:12.080464 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:25:12.080472 | orchestrator |
2026-02-20 02:25:12.080479 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-20 02:25:12.080487 | orchestrator | Friday 20 February 2026 02:25:07 +0000 (0:00:02.124) 0:01:04.290 *******
2026-02-20 02:25:12.080495 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:25:12.080503 | orchestrator |
2026-02-20 02:25:12.080525 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-20 02:25:12.080534 | orchestrator | Friday 20 February 2026 02:25:08 +0000 (0:00:00.701) 0:01:04.992 *******
2026-02-20 02:25:12.080556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-20 02:25:12.080569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.080581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.080591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-20 02:25:12.080601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.080628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-20 02:25:12.711442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711458 | orchestrator |
2026-02-20 02:25:12.711465 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-20 02:25:12.711472 | orchestrator | Friday 20 February 2026 02:25:12 +0000 (0:00:03.722) 0:01:08.714 *******
2026-02-20 02:25:12.711478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-20 02:25:12.711484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711538 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:25:12.711545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-20 02:25:12.711551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-20 02:25:12.711557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:12.711562 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:12.711568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 02:25:12.711588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 02:25:22.817377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:22.817476 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:22.817488 | orchestrator | 2026-02-20 02:25:22.817499 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-20 02:25:22.817511 | orchestrator | Friday 20 February 2026 02:25:12 +0000 (0:00:00.627) 0:01:09.341 ******* 2026-02-20 02:25:22.817523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817546 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:22.817558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817580 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:22.817591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-20 02:25:22.817612 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:22.817621 | orchestrator | 2026-02-20 02:25:22.817628 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-20 02:25:22.817656 | orchestrator | Friday 20 February 2026 02:25:13 +0000 (0:00:00.915) 0:01:10.256 ******* 2026-02-20 02:25:22.817663 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:22.817669 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:22.817676 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:22.817682 | orchestrator | 2026-02-20 02:25:22.817688 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-20 02:25:22.817694 | orchestrator | Friday 20 February 2026 02:25:15 +0000 (0:00:01.615) 0:01:11.872 ******* 2026-02-20 02:25:22.817701 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:22.817707 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:22.817713 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:22.817719 | orchestrator | 2026-02-20 02:25:22.817725 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-20 02:25:22.817732 | orchestrator | 
Friday 20 February 2026 02:25:17 +0000 (0:00:02.053) 0:01:13.925 ******* 2026-02-20 02:25:22.817738 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:22.817744 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:22.817750 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:22.817756 | orchestrator | 2026-02-20 02:25:22.817762 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-20 02:25:22.817769 | orchestrator | Friday 20 February 2026 02:25:17 +0000 (0:00:00.318) 0:01:14.244 ******* 2026-02-20 02:25:22.817775 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:25:22.817781 | orchestrator | 2026-02-20 02:25:22.817787 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-20 02:25:22.817793 | orchestrator | Friday 20 February 2026 02:25:18 +0000 (0:00:00.691) 0:01:14.936 ******* 2026-02-20 02:25:22.817828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 02:25:22.817838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 02:25:22.817845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 02:25:22.817857 | orchestrator | 2026-02-20 02:25:22.817863 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-20 02:25:22.817870 | orchestrator | Friday 20 February 2026 02:25:21 +0000 (0:00:03.076) 0:01:18.012 ******* 2026-02-20 02:25:22.817877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 02:25:22.817884 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:22.817891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 02:25:22.817920 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:22.817937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 02:25:31.236667 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:31.236821 | orchestrator | 2026-02-20 02:25:31.236845 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-20 02:25:31.236862 | orchestrator | Friday 20 February 2026 02:25:22 +0000 (0:00:01.434) 0:01:19.447 ******* 2026-02-20 02:25:31.236882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.236981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.237034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.237055 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:31.237071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.237088 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:31.237106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.237123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 02:25:31.237139 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:31.237149 | orchestrator | 2026-02-20 02:25:31.237159 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-20 02:25:31.237169 | orchestrator | Friday 20 February 2026 02:25:24 +0000 (0:00:01.744) 0:01:21.192 ******* 2026-02-20 02:25:31.237179 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:31.237193 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:31.237205 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:31.237215 | orchestrator | 2026-02-20 02:25:31.237227 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-20 02:25:31.237238 | orchestrator | Friday 20 February 2026 02:25:24 +0000 (0:00:00.442) 0:01:21.634 ******* 2026-02-20 02:25:31.237249 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:31.237260 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:31.237271 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:31.237281 | orchestrator | 2026-02-20 02:25:31.237308 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-20 02:25:31.237320 | orchestrator | Friday 20 February 2026 02:25:26 +0000 (0:00:01.449) 0:01:23.084 ******* 2026-02-20 02:25:31.237332 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:25:31.237343 | orchestrator | 2026-02-20 02:25:31.237354 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-20 02:25:31.237365 | orchestrator | Friday 20 February 2026 02:25:27 +0000 (0:00:01.028) 0:01:24.112 ******* 2026-02-20 02:25:31.237402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 02:25:31.237431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 02:25:31.237444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.237456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.237468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.237486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.921827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.921951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.921970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 02:25:31.921984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922159 | orchestrator | 2026-02-20 02:25:31.922173 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-20 02:25:31.922187 | orchestrator | Friday 20 February 2026 02:25:31 +0000 (0:00:03.847) 0:01:27.960 ******* 2026-02-20 02:25:31.922217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 02:25:31.922241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:31.922299 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:31.922323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 02:25:38.871611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 
02:25:38.871707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:38.871722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:38.871733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 02:25:38.871770 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:38.871798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:25:38.871826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 02:25:38.871837 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 02:25:38.871847 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:38.871858 | orchestrator | 2026-02-20 02:25:38.871869 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-20 02:25:38.871880 | orchestrator | Friday 20 February 2026 02:25:32 +0000 (0:00:00.696) 0:01:28.656 ******* 2026-02-20 02:25:38.871941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.871952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.871964 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:38.871974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.871984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.871993 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:38.872003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.872013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-20 02:25:38.872032 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:38.872042 | orchestrator | 2026-02-20 02:25:38.872082 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-20 02:25:38.872092 | orchestrator | Friday 20 February 2026 02:25:33 +0000 (0:00:01.318) 0:01:29.974 ******* 2026-02-20 02:25:38.872102 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:38.872112 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:38.872122 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:38.872133 | orchestrator | 2026-02-20 02:25:38.872145 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-20 02:25:38.872156 | orchestrator | Friday 20 February 2026 02:25:34 +0000 (0:00:01.579) 0:01:31.554 ******* 2026-02-20 02:25:38.872167 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:38.872178 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:38.872196 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:38.872208 | orchestrator | 2026-02-20 02:25:38.872219 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-20 02:25:38.872230 | orchestrator | Friday 20 February 2026 02:25:37 +0000 
(0:00:02.231) 0:01:33.786 ******* 2026-02-20 02:25:38.872241 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:38.872251 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:38.872262 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:38.872273 | orchestrator | 2026-02-20 02:25:38.872284 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-20 02:25:38.872295 | orchestrator | Friday 20 February 2026 02:25:37 +0000 (0:00:00.320) 0:01:34.106 ******* 2026-02-20 02:25:38.872306 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:38.872318 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:38.872328 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:38.872339 | orchestrator | 2026-02-20 02:25:38.872351 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-20 02:25:38.872362 | orchestrator | Friday 20 February 2026 02:25:37 +0000 (0:00:00.348) 0:01:34.455 ******* 2026-02-20 02:25:38.872373 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:25:38.872384 | orchestrator | 2026-02-20 02:25:38.872395 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-20 02:25:38.872406 | orchestrator | Friday 20 February 2026 02:25:38 +0000 (0:00:01.053) 0:01:35.509 ******* 2026-02-20 02:25:42.367607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 02:25:42.367677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:42.367701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 02:25:42.367752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:42.367756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:42.367775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2026-02-20 02:25:43.310638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:43.310666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2026-02-20 02:25:43.310688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310745 | orchestrator | 2026-02-20 
02:25:43.310757 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-20 02:25:43.310768 | orchestrator | Friday 20 February 2026 02:25:42 +0000 (0:00:03.793) 0:01:39.302 ******* 2026-02-20 02:25:43.310779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 02:25:43.310795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:43.310806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.310834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.842451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.842525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.842535 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:43.842545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 02:25:43.842554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:43.843070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.843105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.843149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.843163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.843171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
2026-02-20 02:25:43.843179 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:43.843189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 02:25:43.843198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 02:25:43.843206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 02:25:43.843227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 02:25:54.552535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 02:25:54.552628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 02:25:54.552638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 02:25:54.552645 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:54.552651 | orchestrator | 2026-02-20 02:25:54.552656 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-20 02:25:54.552662 | orchestrator | Friday 20 February 2026 02:25:43 +0000 (0:00:01.169) 0:01:40.471 ******* 2026-02-20 02:25:54.552668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-20 02:25:54.552674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-20 02:25:54.552680 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:54.552685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-02-20 02:25:54.552704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-20 02:25:54.552708 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:54.552713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-20 02:25:54.552717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-20 02:25:54.552722 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:54.552727 | orchestrator | 2026-02-20 02:25:54.552732 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-20 02:25:54.552737 | orchestrator | Friday 20 February 2026 02:25:45 +0000 (0:00:01.417) 0:01:41.889 ******* 2026-02-20 02:25:54.552741 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:54.552746 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:54.552750 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:54.552755 | orchestrator | 2026-02-20 02:25:54.552759 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-20 02:25:54.552764 | orchestrator | Friday 20 February 2026 02:25:46 +0000 (0:00:01.370) 0:01:43.259 ******* 2026-02-20 02:25:54.552768 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:25:54.552773 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:25:54.552777 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:25:54.552782 | orchestrator | 2026-02-20 02:25:54.552786 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2026-02-20 02:25:54.552790 | orchestrator | Friday 20 February 2026 02:25:48 +0000 (0:00:02.282) 0:01:45.542 ******* 2026-02-20 02:25:54.552804 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:25:54.552809 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:25:54.552813 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:25:54.552818 | orchestrator | 2026-02-20 02:25:54.552822 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-20 02:25:54.552827 | orchestrator | Friday 20 February 2026 02:25:49 +0000 (0:00:00.316) 0:01:45.858 ******* 2026-02-20 02:25:54.552832 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:25:54.552836 | orchestrator | 2026-02-20 02:25:54.552843 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-20 02:25:54.552848 | orchestrator | Friday 20 February 2026 02:25:50 +0000 (0:00:01.130) 0:01:46.989 ******* 2026-02-20 02:25:54.552855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 02:25:54.552866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:25:54.552917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 02:25:57.706897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:25:57.706980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 02:25:57.706999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:25:57.707019 | orchestrator | 2026-02-20 02:25:57.707025 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-20 02:25:57.707030 | orchestrator | Friday 20 February 2026 02:25:54 +0000 (0:00:04.313) 0:01:51.303 ******* 2026-02-20 02:25:57.707039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 02:25:57.707047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:26:01.451414 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:01.451531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 02:26:01.451553 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:26:01.451594 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:01.451626 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 02:26:01.451646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 02:26:01.451670 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:01.451682 | orchestrator | 2026-02-20 02:26:01.451694 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-20 02:26:01.451706 | orchestrator | Friday 20 February 2026 02:25:57 +0000 (0:00:03.135) 0:01:54.438 ******* 2026-02-20 
02:26:01.451719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:01.451741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:10.105829 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:10.105949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:10.105959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:10.105964 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:10.105983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:10.105988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 02:26:10.106007 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:10.106047 | orchestrator | 2026-02-20 02:26:10.106052 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-20 02:26:10.106058 | orchestrator | Friday 20 February 2026 02:26:01 +0000 (0:00:03.644) 0:01:58.083 ******* 2026-02-20 02:26:10.106062 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:26:10.106066 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:26:10.106069 | orchestrator | changed: 
[testbed-node-2] 2026-02-20 02:26:10.106073 | orchestrator | 2026-02-20 02:26:10.106077 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-20 02:26:10.106081 | orchestrator | Friday 20 February 2026 02:26:02 +0000 (0:00:01.346) 0:01:59.429 ******* 2026-02-20 02:26:10.106085 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:26:10.106089 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:26:10.106093 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:26:10.106096 | orchestrator | 2026-02-20 02:26:10.106100 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-20 02:26:10.106104 | orchestrator | Friday 20 February 2026 02:26:04 +0000 (0:00:02.133) 0:02:01.563 ******* 2026-02-20 02:26:10.106108 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:10.106112 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:10.106116 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:10.106119 | orchestrator | 2026-02-20 02:26:10.106123 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-20 02:26:10.106127 | orchestrator | Friday 20 February 2026 02:26:05 +0000 (0:00:00.323) 0:02:01.886 ******* 2026-02-20 02:26:10.106131 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:26:10.106135 | orchestrator | 2026-02-20 02:26:10.106139 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-20 02:26:10.106143 | orchestrator | Friday 20 February 2026 02:26:06 +0000 (0:00:01.109) 0:02:02.995 ******* 2026-02-20 02:26:10.106157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-20 02:26:10.106164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-20 02:26:10.106168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-20 02:26:10.106177 | orchestrator | 2026-02-20 02:26:10.106181 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-20 02:26:10.106186 | orchestrator | Friday 20 February 2026 02:26:09 +0000 (0:00:03.115) 0:02:06.111 ******* 2026-02-20 02:26:10.106190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-20 02:26:10.106194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-20 02:26:10.106198 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:10.106202 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:10.106206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-20 02:26:10.106210 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:10.106214 | orchestrator | 2026-02-20 02:26:10.106265 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-20 02:26:10.106273 | orchestrator | Friday 20 February 2026 02:26:09 +0000 (0:00:00.399) 0:02:06.511 ******* 2026-02-20 02:26:10.106278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:10.106288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:18.798315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:18.798387 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:18.798394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:18.798415 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
02:26:18.798421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:18.798425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-20 02:26:18.798430 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:18.798434 | orchestrator | 2026-02-20 02:26:18.798439 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-20 02:26:18.798445 | orchestrator | Friday 20 February 2026 02:26:10 +0000 (0:00:00.895) 0:02:07.406 ******* 2026-02-20 02:26:18.798449 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:26:18.798453 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:26:18.798468 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:26:18.798473 | orchestrator | 2026-02-20 02:26:18.798477 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-20 02:26:18.798481 | orchestrator | Friday 20 February 2026 02:26:12 +0000 (0:00:01.313) 0:02:08.719 ******* 2026-02-20 02:26:18.798485 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:26:18.798489 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:26:18.798493 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:26:18.798498 | orchestrator | 2026-02-20 02:26:18.798502 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-20 02:26:18.798506 | orchestrator | Friday 20 February 2026 02:26:14 +0000 (0:00:02.029) 0:02:10.748 ******* 2026-02-20 02:26:18.798510 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:18.798514 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:18.798518 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 02:26:18.798522 | orchestrator | 2026-02-20 02:26:18.798526 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-20 02:26:18.798531 | orchestrator | Friday 20 February 2026 02:26:14 +0000 (0:00:00.322) 0:02:11.071 ******* 2026-02-20 02:26:18.798535 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:26:18.798539 | orchestrator | 2026-02-20 02:26:18.798544 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-20 02:26:18.798548 | orchestrator | Friday 20 February 2026 02:26:15 +0000 (0:00:01.152) 0:02:12.223 ******* 2026-02-20 02:26:18.798567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 02:26:18.798582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 02:26:18.798593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 02:26:20.448433 | orchestrator | 2026-02-20 02:26:20.448508 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-20 02:26:20.448516 | orchestrator | Friday 20 February 2026 02:26:18 +0000 (0:00:03.208) 0:02:15.432 ******* 2026-02-20 02:26:20.448538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 02:26:20.448548 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:20.448567 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 02:26:20.448590 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:20.448599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 02:26:20.448605 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:26:20.448609 | orchestrator | 2026-02-20 02:26:20.448619 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-20 02:26:20.448623 | orchestrator | Friday 20 February 2026 02:26:19 +0000 (0:00:00.662) 0:02:16.095 ******* 2026-02-20 02:26:20.448629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-20 02:26:20.448639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 02:26:20.448651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-20 02:26:20.448666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-20 02:26:29.122576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-20 02:26:29.122652 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:29.122660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-20 02:26:29.122680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-20 02:26:29.122687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-20 02:26:29.122692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-20 02:26:29.122697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-20 02:26:29.122701 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:29.122707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-20 02:26:29.122714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-20 02:26:29.122738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-20 02:26:29.122744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-20 02:26:29.122751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-20 02:26:29.122757 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:29.122763 | orchestrator |
2026-02-20 02:26:29.122770 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-20 02:26:29.122777 | orchestrator | Friday 20 February 2026 02:26:20 +0000 (0:00:00.989) 0:02:17.084 *******
2026-02-20 02:26:29.122784 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:29.122790 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:29.122797 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:29.122805 | orchestrator |
2026-02-20 02:26:29.122809 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-20 02:26:29.122812 | orchestrator | Friday 20 February 2026 02:26:22 +0000 (0:00:01.630) 0:02:18.714 *******
2026-02-20 02:26:29.122816 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:29.122820 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:29.122823 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:29.122827 | orchestrator |
2026-02-20 02:26:29.122831 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-20 02:26:29.122835 | orchestrator | Friday 20 February 2026 02:26:24 +0000 (0:00:00.302) 0:02:20.786 *******
2026-02-20 02:26:29.122838 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:29.122842 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:29.122891 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:29.122896 | orchestrator |
2026-02-20 02:26:29.122900 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-20 02:26:29.122904 | orchestrator | Friday 20 February 2026 02:26:24 +0000 (0:00:00.296) 0:02:21.088 *******
2026-02-20 02:26:29.122907 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:29.122911 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:29.122915 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:29.122919 | orchestrator |
2026-02-20 02:26:29.122923 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-20 02:26:29.122926 | orchestrator | Friday 20 February 2026 02:26:24 +0000 (0:00:00.296) 0:02:21.385 *******
2026-02-20 02:26:29.122930 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:26:29.122934 | orchestrator |
2026-02-20 02:26:29.122941 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-02-20 02:26:29.122945 | orchestrator | Friday 20 February 2026 02:26:25 +0000 (0:00:01.205) 0:02:22.590 *******
2026-02-20 02:26:29.122952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:29.122964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:29.122969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:29.122974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:29.122982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:29.734666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:29.734796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:29.734814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:29.734827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:29.734839 | orchestrator |
2026-02-20 02:26:29.734887 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-02-20 02:26:29.734907 | orchestrator | Friday 20 February 2026 02:26:29 +0000 (0:00:03.162) 0:02:25.752 *******
2026-02-20 02:26:29.734959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:29.734976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:29.734997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:29.735009 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:29.735023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:29.735035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:29.735047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:29.735058 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:29.735091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 02:26:39.022312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 02:26:39.022410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 02:26:39.022424 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:39.022438 | orchestrator |
2026-02-20 02:26:39.022451 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-02-20 02:26:39.022463 | orchestrator | Friday 20 February 2026 02:26:29 +0000 (0:00:00.612) 0:02:26.364 *******
2026-02-20 02:26:39.022477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022505 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:39.022516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022539 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:39.022550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-20 02:26:39.022597 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:39.022609 | orchestrator |
2026-02-20 02:26:39.022620 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-20 02:26:39.022631 | orchestrator | Friday 20 February 2026 02:26:30 +0000 (0:00:01.086) 0:02:27.451 *******
2026-02-20 02:26:39.022642 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:39.022653 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:39.022664 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:39.022674 | orchestrator |
2026-02-20 02:26:39.022699 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-20 02:26:39.022711 | orchestrator | Friday 20 February 2026 02:26:32 +0000 (0:00:01.352) 0:02:28.803 *******
2026-02-20 02:26:39.022722 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:39.022733 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:39.022743 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:39.022783 | orchestrator |
2026-02-20 02:26:39.022794 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-20 02:26:39.022805 | orchestrator | Friday 20 February 2026 02:26:34 +0000 (0:00:02.065) 0:02:30.868 *******
2026-02-20 02:26:39.022816 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:39.022828 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:39.022881 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:39.022901 | orchestrator |
2026-02-20 02:26:39.022919 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-20 02:26:39.022959 | orchestrator | Friday 20 February 2026 02:26:34 +0000 (0:00:00.322) 0:02:31.190 *******
2026-02-20 02:26:39.022981 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:26:39.023002 | orchestrator |
2026-02-20 02:26:39.023021 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-02-20 02:26:39.023040 | orchestrator | Friday 20 February 2026 02:26:35 +0000 (0:00:01.209) 0:02:32.400 *******
2026-02-20 02:26:39.023060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:39.023085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:39.023105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:39.023141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:39.023177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:44.294396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:44.294507 | orchestrator |
2026-02-20 02:26:44.294540 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-20 02:26:44.294569 | orchestrator | Friday 20 February 2026 02:26:39 +0000 (0:00:03.251) 0:02:35.651 *******
2026-02-20 02:26:44.294592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:44.294702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:44.294729 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:44.294759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:44.294806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:44.294821 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:44.294832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-20 02:26:44.294911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-20 02:26:44.294945 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:44.294974 | orchestrator |
2026-02-20 02:26:44.294995 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-20 02:26:44.295015 | orchestrator | Friday 20 February 2026 02:26:39 +0000 (0:00:00.667) 0:02:36.319 *******
2026-02-20 02:26:44.295036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295079 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:26:44.295099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295143 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:26:44.295171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-20 02:26:44.295197 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:26:44.295207 | orchestrator |
2026-02-20 02:26:44.295218 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-20 02:26:44.295229 | orchestrator | Friday 20 February 2026 02:26:40 +0000 (0:00:00.883) 0:02:37.202 *******
2026-02-20 02:26:44.295240 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:44.295251 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:44.295262 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:44.295272 | orchestrator |
2026-02-20 02:26:44.295283 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-20 02:26:44.295294 | orchestrator | Friday 20 February 2026 02:26:42 +0000 (0:00:01.645) 0:02:38.847 *******
2026-02-20 02:26:44.295305 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:26:44.295316 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:26:44.295327 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:26:44.295338 | orchestrator |
2026-02-20 02:26:44.295349 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-20 02:26:44.295371 | orchestrator | Friday 20 February 2026 02:26:44 +0000 (0:00:02.080) 0:02:40.928 *******
2026-02-20 02:26:48.742248 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:26:48.742348 | orchestrator |
2026-02-20 02:26:48.742363 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-20 02:26:48.742374 | orchestrator | Friday 20 February 2026 02:26:45 +0000 (0:00:01.030) 0:02:41.958 *******
2026-02-20 02:26:48.742387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 02:26:48.742425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 02:26:48.742502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742531 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 02:26:48.742556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:48.742585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703199 | orchestrator | 2026-02-20 02:26:49.703294 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-20 02:26:49.703309 | orchestrator | Friday 20 February 2026 02:26:48 +0000 (0:00:03.509) 0:02:45.468 ******* 2026-02-20 02:26:49.703322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 02:26:49.703336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703393 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:26:49.703412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 02:26:49.703468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703499 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:26:49.703508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 02:26:49.703522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 02:26:49.703556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 02:27:01.067487 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:01.067586 | orchestrator | 2026-02-20 02:27:01.067597 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-20 02:27:01.067606 | orchestrator | Friday 20 February 2026 02:26:49 +0000 (0:00:00.956) 0:02:46.425 ******* 2026-02-20 02:27:01.067615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067634 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:01.067641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067654 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:01.067661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-20 02:27:01.067674 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:01.067680 | orchestrator | 2026-02-20 02:27:01.067687 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-20 02:27:01.067693 | orchestrator | Friday 20 February 2026 02:26:50 +0000 (0:00:00.954) 0:02:47.379 ******* 2026-02-20 02:27:01.067700 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:01.067707 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:01.067713 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:01.067720 | orchestrator | 2026-02-20 02:27:01.067726 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-20 02:27:01.067733 | orchestrator | Friday 20 February 2026 02:26:52 +0000 (0:00:01.296) 0:02:48.676 ******* 2026-02-20 02:27:01.067740 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:01.067747 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:01.067753 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:01.067760 | orchestrator | 2026-02-20 02:27:01.067766 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-20 02:27:01.067772 | orchestrator | Friday 20 February 2026 02:26:54 +0000 (0:00:02.107) 0:02:50.784 ******* 2026-02-20 02:27:01.067802 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:27:01.067809 | orchestrator | 2026-02-20 02:27:01.067815 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-20 02:27:01.067888 | orchestrator | Friday 20 February 2026 02:26:55 +0000 (0:00:01.341) 0:02:52.125 ******* 2026-02-20 02:27:01.067900 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 02:27:01.067906 | orchestrator | 2026-02-20 02:27:01.067912 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-20 02:27:01.067918 | orchestrator | Friday 20 February 2026 02:26:58 +0000 (0:00:03.229) 0:02:55.354 ******* 2026-02-20 02:27:01.067946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:27:01.067956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 02:27:01.067963 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:01.067974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:27:01.067988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 02:27:01.067995 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:01.068008 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-20 02:27:03.453160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-20 02:27:03.454212 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:03.454267 | orchestrator |
2026-02-20 02:27:03.454280 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-20 02:27:03.454292 | orchestrator | Friday 20 February 2026 02:27:01 +0000 (0:00:02.339) 0:02:57.694 *******
2026-02-20 02:27:03.454322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-20 02:27:03.454340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-20 02:27:03.454362 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:03.454556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-20 02:27:03.454606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-20 02:27:03.454624 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:03.454642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-20 02:27:03.454676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-20 02:27:13.314545 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:13.314616 | orchestrator |
2026-02-20 02:27:13.314622 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-20 02:27:13.314627 | orchestrator | Friday 20 February 2026 02:27:03 +0000 (0:00:02.388) 0:03:00.082 *******
2026-02-20 02:27:13.314633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314655 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:13.314659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314667 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:13.314671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-20 02:27:13.314693 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:13.314697 | orchestrator |
2026-02-20 02:27:13.314701 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-20 02:27:13.314705 | orchestrator | Friday 20 February 2026 02:27:06 +0000 (0:00:03.004) 0:03:03.086 *******
2026-02-20 02:27:13.314709 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:27:13.314722 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:27:13.314726 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:27:13.314730 | orchestrator |
2026-02-20 02:27:13.314733 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-20 02:27:13.314737 | orchestrator | Friday 20 February 2026 02:27:08 +0000 (0:00:02.040) 0:03:05.127 *******
2026-02-20 02:27:13.314741 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:13.314745 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:13.314748 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:13.314752 | orchestrator |
2026-02-20 02:27:13.314756 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-20 02:27:13.314759 | orchestrator | Friday 20 February 2026 02:27:09 +0000 (0:00:01.428) 0:03:06.556 *******
2026-02-20 02:27:13.314763 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:13.314767 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:13.314770 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:13.314774 | orchestrator |
2026-02-20 02:27:13.314778 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-20 02:27:13.314782 | orchestrator | Friday 20 February 2026 02:27:10 +0000 (0:00:00.311) 0:03:06.868 *******
2026-02-20 02:27:13.314785 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:27:13.314789 | orchestrator |
2026-02-20 02:27:13.314793 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-20 02:27:13.314799 | orchestrator | Friday 20 February 2026 02:27:11 +0000 (0:00:01.348) 0:03:08.216 *******
2026-02-20 02:27:13.314804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:13.314811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:13.314815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:13.314861 | orchestrator |
2026-02-20 02:27:13.314869 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-20 02:27:13.314876 | orchestrator | Friday 20 February 2026 02:27:13 +0000 (0:00:00.388) 0:03:09.761 *******
2026-02-20 02:27:13.314886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:21.685135 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:21.685282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:21.685315 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:21.685336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-20 02:27:21.685355 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:21.685374 | orchestrator |
2026-02-20 02:27:21.685394 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-02-20 02:27:21.685412 | orchestrator | Friday 20 February 2026 02:27:13 +0000 (0:00:00.388) 0:03:10.150 *******
2026-02-20 02:27:21.685432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-20 02:27:21.685452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-20 02:27:21.685500 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:21.685519 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:21.685536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-20 02:27:21.685553 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:21.685570 | orchestrator |
2026-02-20 02:27:21.685634 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-02-20 02:27:21.685660 | orchestrator | Friday 20 February 2026 02:27:14 +0000 (0:00:00.855) 0:03:11.005 *******
2026-02-20 02:27:21.685673 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:21.685684 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:21.685695 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:21.685707 | orchestrator |
2026-02-20 02:27:21.685718 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-02-20 02:27:21.685729 | orchestrator | Friday 20 February 2026 02:27:14 +0000 (0:00:00.435) 0:03:11.440 *******
2026-02-20 02:27:21.685740 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:21.685751 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:21.685762 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:21.685773 | orchestrator |
2026-02-20 02:27:21.685784 | orchestrator | TASK [include_role : mistral] **************************************************
2026-02-20 02:27:21.685796 | orchestrator | Friday 20 February 2026 02:27:16 +0000 (0:00:01.320) 0:03:12.760 *******
2026-02-20 02:27:21.685807 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:27:21.685843 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:27:21.685861 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:27:21.685873 | orchestrator |
2026-02-20 02:27:21.685884 | orchestrator | TASK [include_role : neutron] **************************************************
2026-02-20 02:27:21.685896 | orchestrator | Friday 20 February 2026 02:27:16 +0000 (0:00:00.335) 0:03:13.096 *******
2026-02-20 02:27:21.685907 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:27:21.685918 | orchestrator |
2026-02-20 02:27:21.685930 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-20 02:27:21.685942 | orchestrator | Friday 20 February 2026 02:27:17 +0000 (0:00:01.479) 0:03:14.576 *******
2026-02-20 02:27:21.685982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 02:27:21.686002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.686137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.686165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.686183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-20 02:27:21.686215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.797929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 02:27:21.798102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-20 02:27:21.798120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-20 02:27:21.798136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 02:27:21.798191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-20 02:27:21.798198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-20 02:27:21.798216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-20 02:27:21.987733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False,
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:21.987945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:21.987965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:21.987977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-02-20 02:27:21.987988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:21.988000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:21.988039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:21.988059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:21.988070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:21.988080 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-20 02:27:21.988090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:21.988100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:21.988146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:22.287634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:22.287762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 02:27:22.287793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:22.287846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:22.287892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:22.287975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-20 02:27:22.288001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:22.288022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:22.288045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:22.288067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:22.288088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:22.288146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.341699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-20 02:27:23.341873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.341904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.341926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:23.341950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.342003 | orchestrator | 2026-02-20 02:27:23.342102 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-20 02:27:23.342118 | orchestrator | Friday 20 February 2026 02:27:22 +0000 (0:00:04.345) 0:03:18.922 ******* 2026-02-20 02:27:23.342170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 02:27:23.342185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.342198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.342211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.342225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-20 02:27:23.342260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.444175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.444187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 02:27:23.444196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444256 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.444265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.444286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-20 02:27:23.444298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-20 02:27:23.444308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.444320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.522967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.523051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.523065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:23.523096 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.523118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.523126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.523135 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:23.523159 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.523168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 02:27:23.523182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.523190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.523199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-20 02:27:23.523213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.785123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.785214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-20 
02:27:23.785306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.785332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-20 02:27:23.785356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:23.785398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.785416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.785445 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.785458 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:23.785469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:23.785479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.785494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:23.785504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:23.785519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-20 02:27:34.140295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-20 02:27:34.140445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 02:27:34.140464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 02:27:34.140494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 02:27:34.140508 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:34.140522 | orchestrator | 2026-02-20 02:27:34.140535 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-20 02:27:34.140547 | orchestrator | Friday 20 February 2026 02:27:23 +0000 (0:00:01.494) 0:03:20.416 ******* 2026-02-20 02:27:34.140560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-20 02:27:34.140572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-20 02:27:34.140584 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:34.140595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}}) 
 2026-02-20 02:27:34.140606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-20 02:27:34.140626 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:34.140691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-20 02:27:34.140731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-20 02:27:34.140749 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:34.140767 | orchestrator | 2026-02-20 02:27:34.140783 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-20 02:27:34.140801 | orchestrator | Friday 20 February 2026 02:27:25 +0000 (0:00:02.011) 0:03:22.428 ******* 2026-02-20 02:27:34.140847 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:34.140865 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:34.140882 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:34.140900 | orchestrator | 2026-02-20 02:27:34.140919 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-20 02:27:34.140936 | orchestrator | Friday 20 February 2026 02:27:27 +0000 (0:00:01.378) 0:03:23.806 ******* 2026-02-20 02:27:34.140955 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:34.140972 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:34.140990 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:34.141008 | orchestrator | 2026-02-20 02:27:34.141024 | orchestrator | TASK [include_role : placement] 
************************************************ 2026-02-20 02:27:34.141042 | orchestrator | Friday 20 February 2026 02:27:29 +0000 (0:00:02.233) 0:03:26.040 ******* 2026-02-20 02:27:34.141059 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:27:34.141079 | orchestrator | 2026-02-20 02:27:34.141098 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-20 02:27:34.141118 | orchestrator | Friday 20 February 2026 02:27:30 +0000 (0:00:01.223) 0:03:27.263 ******* 2026-02-20 02:27:34.141138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:34.141172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:34.141194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:34.141226 | orchestrator | 2026-02-20 02:27:34.141245 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-20 02:27:34.141277 | orchestrator | Friday 20 February 2026 02:27:34 +0000 (0:00:03.507) 0:03:30.771 ******* 2026-02-20 02:27:44.913716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-20 02:27:44.913874 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:44.913893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-20 02:27:44.913920 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:44.913962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-20 02:27:44.913998 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:44.914010 | orchestrator | 2026-02-20 02:27:44.914086 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-20 02:27:44.914099 | orchestrator | Friday 20 February 2026 02:27:34 +0000 (0:00:00.577) 0:03:31.348 ******* 2026-02-20 02:27:44.914110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914136 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:44.914147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914158 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914170 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:44.914197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-20 02:27:44.914227 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:44.914248 | orchestrator | 2026-02-20 02:27:44.914276 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-20 02:27:44.914298 | orchestrator | Friday 20 February 2026 02:27:35 +0000 (0:00:00.780) 0:03:32.129 ******* 2026-02-20 02:27:44.914319 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:44.914340 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:44.914361 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:44.914383 | orchestrator | 2026-02-20 02:27:44.914403 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-20 02:27:44.914421 | orchestrator | Friday 20 February 2026 02:27:37 +0000 (0:00:01.873) 0:03:34.002 ******* 2026-02-20 02:27:44.914434 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:44.914447 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:44.914460 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:44.914473 | orchestrator | 2026-02-20 02:27:44.914485 | orchestrator | TASK [include_role : nova] 
***************************************************** 2026-02-20 02:27:44.914498 | orchestrator | Friday 20 February 2026 02:27:39 +0000 (0:00:01.816) 0:03:35.819 ******* 2026-02-20 02:27:44.914510 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:27:44.914522 | orchestrator | 2026-02-20 02:27:44.914534 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-20 02:27:44.914546 | orchestrator | Friday 20 February 2026 02:27:40 +0000 (0:00:01.536) 0:03:37.355 ******* 2026-02-20 02:27:44.914572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:44.914603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:44.914617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:44.914642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:46.197035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 02:27:46.197171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197185 | orchestrator | 2026-02-20 02:27:46.197192 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-20 02:27:46.197200 | orchestrator | Friday 20 February 2026 02:27:44 +0000 (0:00:04.191) 0:03:41.546 ******* 2026-02-20 02:27:46.197220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 02:27:46.197245 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:46.197258 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:27:46.197266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 02:27:46.197278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:57.052394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:57.052517 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:57.052549 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 02:27:57.052563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 02:27:57.052573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 02:27:57.052596 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:57.052616 | orchestrator | 2026-02-20 02:27:57.052627 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-20 02:27:57.052638 | orchestrator | Friday 20 February 2026 02:27:46 +0000 (0:00:01.280) 0:03:42.827 ******* 2026-02-20 02:27:57.052648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052714 | orchestrator | skipping: [testbed-node-0] 2026-02-20 
02:27:57.052724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052760 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:27:57.052774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-20 02:27:57.052830 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:27:57.052839 | orchestrator | 2026-02-20 02:27:57.052848 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-20 02:27:57.052857 | orchestrator | Friday 20 February 2026 02:27:47 +0000 (0:00:00.905) 0:03:43.733 ******* 2026-02-20 02:27:57.052866 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:57.052875 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:57.052884 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:57.052893 | orchestrator | 2026-02-20 02:27:57.052902 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-20 02:27:57.052911 | orchestrator | Friday 20 February 2026 02:27:48 +0000 (0:00:01.442) 0:03:45.176 ******* 2026-02-20 02:27:57.052920 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:27:57.052928 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:27:57.052937 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:27:57.052947 | orchestrator | 2026-02-20 02:27:57.052956 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-20 02:27:57.052965 | orchestrator | Friday 20 February 2026 02:27:50 +0000 (0:00:02.138) 0:03:47.314 ******* 2026-02-20 02:27:57.052974 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:27:57.052984 | orchestrator | 2026-02-20 02:27:57.052994 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-20 02:27:57.053004 | orchestrator | Friday 20 February 2026 02:27:52 +0000 (0:00:01.596) 0:03:48.911 ******* 2026-02-20 02:27:57.053014 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-20 02:27:57.053025 | orchestrator | 2026-02-20 02:27:57.053034 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-20 02:27:57.053053 | orchestrator | Friday 20 February 2026 02:27:53 +0000 (0:00:00.878) 0:03:49.790 ******* 2026-02-20 02:27:57.053065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-20 02:27:57.053083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-20 02:28:08.872148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-20 02:28:08.872287 | orchestrator | 
2026-02-20 02:28:08.872317 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-20 02:28:08.872339 | orchestrator | Friday 20 February 2026 02:27:57 +0000 (0:00:03.894) 0:03:53.684 ******* 2026-02-20 02:28:08.872382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-20 02:28:08.872407 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:08.872427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-20 02:28:08.872447 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:08.872466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.872486 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:08.872506 | orchestrator |
2026-02-20 02:28:08.872527 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-20 02:28:08.872545 | orchestrator | Friday 20 February 2026 02:27:58 +0000 (0:00:01.372) 0:03:55.057 *******
2026-02-20 02:28:08.872598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872641 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:08.872661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872689 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:08.872702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 02:28:08.872748 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:08.872762 | orchestrator |
2026-02-20 02:28:08.872775 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 02:28:08.872788 | orchestrator | Friday 20 February 2026 02:27:59 +0000 (0:00:01.576) 0:03:56.634 *******
2026-02-20 02:28:08.872840 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:28:08.872859 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:28:08.872878 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:28:08.872896 | orchestrator |
2026-02-20 02:28:08.872914 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 02:28:08.872933 | orchestrator | Friday 20 February 2026 02:28:02 +0000 (0:00:02.490) 0:03:59.125 *******
2026-02-20 02:28:08.872953 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:28:08.872972 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:28:08.872990 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:28:08.873003 | orchestrator |
2026-02-20 02:28:08.873016 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-20 02:28:08.873026 | orchestrator | Friday 20 February 2026 02:28:05 +0000 (0:00:02.979) 0:04:02.104 *******
2026-02-20 02:28:08.873039 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-20 02:28:08.873050 | orchestrator |
2026-02-20 02:28:08.873061 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-20 02:28:08.873080 | orchestrator | Friday 20 February 2026 02:28:06 +0000 (0:00:01.126) 0:04:03.230 *******
2026-02-20 02:28:08.873093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.873106 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:08.873127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.873138 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:08.873150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.873161 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:08.873171 | orchestrator |
2026-02-20 02:28:08.873182 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-20 02:28:08.873194 | orchestrator | Friday 20 February 2026 02:28:07 +0000 (0:00:00.996) 0:04:04.227 *******
2026-02-20 02:28:08.873205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.873216 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:08.873227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:08.873247 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:32.111548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 02:28:32.111706 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:32.111734 | orchestrator |
2026-02-20 02:28:32.111756 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-20 02:28:32.111837 | orchestrator | Friday 20 February 2026 02:28:08 +0000 (0:00:01.274) 0:04:05.501 *******
2026-02-20 02:28:32.111863 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:32.111883 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:32.111901 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:32.111920 | orchestrator |
2026-02-20 02:28:32.111939 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 02:28:32.111980 | orchestrator | Friday 20 February 2026 02:28:10 +0000 (0:00:01.518) 0:04:07.020 *******
2026-02-20 02:28:32.112034 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:28:32.112055 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:28:32.112074 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:28:32.112094 | orchestrator |
2026-02-20 02:28:32.112113 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 02:28:32.112132 | orchestrator | Friday 20 February 2026 02:28:13 +0000 (0:00:02.727) 0:04:09.748 *******
2026-02-20 02:28:32.112151 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:28:32.112170 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:28:32.112189 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:28:32.112208 | orchestrator |
2026-02-20 02:28:32.112228 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-20 02:28:32.112246 | orchestrator | Friday 20 February 2026 02:28:15 +0000 (0:00:02.764) 0:04:12.512 *******
2026-02-20 02:28:32.112267 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-20 02:28:32.112290 | orchestrator |
2026-02-20 02:28:32.112309 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-20 02:28:32.112328 | orchestrator | Friday 20 February 2026 02:28:17 +0000 (0:00:01.196) 0:04:13.709 *******
2026-02-20 02:28:32.112347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112367 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:32.112387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112408 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:32.112427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112446 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:32.112465 | orchestrator |
2026-02-20 02:28:32.112485 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-20 02:28:32.112505 | orchestrator | Friday 20 February 2026 02:28:18 +0000 (0:00:01.266) 0:04:14.975 *******
2026-02-20 02:28:32.112550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112584 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:32.112604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112625 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:32.112653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 02:28:32.112672 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:32.112691 | orchestrator |
2026-02-20 02:28:32.112710 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-20 02:28:32.112729 | orchestrator | Friday 20 February 2026 02:28:19 +0000 (0:00:01.311) 0:04:16.287 *******
2026-02-20 02:28:32.112747 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:32.112765 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:32.112812 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:32.112833 | orchestrator |
2026-02-20 02:28:32.112852 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 02:28:32.112871 | orchestrator | Friday 20 February 2026 02:28:21 +0000 (0:00:01.871) 0:04:18.159 *******
2026-02-20 02:28:32.112889 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:28:32.112907 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:28:32.112927 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:28:32.112946 | orchestrator |
2026-02-20 02:28:32.112965 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 02:28:32.112983 | orchestrator | Friday 20 February 2026 02:28:23 +0000 (0:00:02.335) 0:04:20.494 *******
2026-02-20 02:28:32.113001 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:28:32.113019 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:28:32.113038 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:28:32.113057 | orchestrator |
2026-02-20 02:28:32.113076 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-20 02:28:32.113094 | orchestrator | Friday 20 February 2026 02:28:27 +0000 (0:00:03.190) 0:04:23.685 *******
2026-02-20 02:28:32.113112 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:28:32.113130 | orchestrator |
2026-02-20 02:28:32.113150 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-20 02:28:32.113169 | orchestrator | Friday 20 February 2026 02:28:28 +0000 (0:00:01.687) 0:04:25.372 *******
2026-02-20 02:28:32.113189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:32.113221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:32.113256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:32.862343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:32.862363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:32.862385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:32.862421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:32.862428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:32.862434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:32.862474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:32.862481 | orchestrator |
2026-02-20 02:28:32.862489 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-20 02:28:32.862496 | orchestrator | Friday 20 February 2026 02:28:32 +0000 (0:00:03.528) 0:04:28.901 *******
2026-02-20 02:28:32.862514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:33.025354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:33.025448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:33.025465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:33.025500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:33.025513 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:33.025526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:33.025551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:33.025575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:33.025590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:33.025600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:33.025617 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:33.025627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 02:28:33.025636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 02:28:33.025651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 02:28:33.025668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-20 02:28:44.717917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-20 02:28:44.718079 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:28:44.718094 | orchestrator |
2026-02-20 02:28:44.718102 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-20 02:28:44.718110 | orchestrator | Friday 20 February 2026 02:28:33 +0000 (0:00:00.759) 0:04:29.661 *******
2026-02-20 02:28:44.718118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-20 02:28:44.718127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-20 02:28:44.718136 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:28:44.718142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-20 02:28:44.718149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-20 02:28:44.718156 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:28:44.718162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 02:28:44.718169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 02:28:44.718176 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:28:44.718182 | orchestrator | 2026-02-20 02:28:44.718189 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-20 02:28:44.718196 | orchestrator | Friday 20 February 2026 02:28:34 +0000 (0:00:00.991) 0:04:30.652 ******* 2026-02-20 02:28:44.718202 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:28:44.718209 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:28:44.718216 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:28:44.718222 | orchestrator | 2026-02-20 02:28:44.718229 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-20 02:28:44.718236 | orchestrator | Friday 20 February 2026 02:28:35 +0000 (0:00:01.795) 0:04:32.448 ******* 2026-02-20 02:28:44.718242 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:28:44.718249 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:28:44.718256 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:28:44.718262 | orchestrator | 2026-02-20 02:28:44.718269 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-20 02:28:44.718275 | orchestrator | Friday 20 February 2026 02:28:37 +0000 (0:00:02.138) 0:04:34.586 ******* 2026-02-20 02:28:44.718282 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:28:44.718289 | orchestrator | 2026-02-20 02:28:44.718295 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-20 02:28:44.718302 | orchestrator | Friday 20 February 2026 02:28:39 +0000 (0:00:01.389) 0:04:35.976 ******* 2026-02-20 02:28:44.718322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:28:44.718353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:28:44.718361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:28:44.718369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:28:44.718381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:28:44.718400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:28:46.692988 | orchestrator | 2026-02-20 02:28:46.693083 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-20 02:28:46.693097 | orchestrator | Friday 20 February 2026 02:28:44 +0000 (0:00:05.366) 0:04:41.342 ******* 2026-02-20 02:28:46.693110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:28:46.693125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-20 02:28:46.693136 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:46.693164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:28:46.693199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-20 02:28:46.693238 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:46.693250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:28:46.693287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-20 02:28:46.693299 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:28:46.693309 | orchestrator | 2026-02-20 02:28:46.693319 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-20 02:28:46.693329 | orchestrator | Friday 20 February 2026 02:28:45 +0000 (0:00:01.052) 0:04:42.395 ******* 2026-02-20 02:28:46.693340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-20 02:28:46.693352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:46.693378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:46.693390 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:46.693400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-02-20 02:28:46.693410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:46.693420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:46.693430 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:46.693439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-20 02:28:46.693449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:46.693471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-20 02:28:52.696411 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:28:52.696518 | orchestrator | 2026-02-20 02:28:52.696537 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-20 02:28:52.696552 | orchestrator | Friday 20 February 2026 02:28:46 +0000 (0:00:00.930) 0:04:43.326 ******* 2026-02-20 02:28:52.696565 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:52.696578 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:52.696591 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 02:28:52.696603 | orchestrator | 2026-02-20 02:28:52.696617 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-20 02:28:52.696631 | orchestrator | Friday 20 February 2026 02:28:47 +0000 (0:00:00.425) 0:04:43.751 ******* 2026-02-20 02:28:52.696645 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:52.696660 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:52.696673 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:28:52.696687 | orchestrator | 2026-02-20 02:28:52.696701 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-20 02:28:52.696716 | orchestrator | Friday 20 February 2026 02:28:48 +0000 (0:00:01.433) 0:04:45.185 ******* 2026-02-20 02:28:52.696730 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:28:52.696746 | orchestrator | 2026-02-20 02:28:52.696755 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-20 02:28:52.696763 | orchestrator | Friday 20 February 2026 02:28:50 +0000 (0:00:01.728) 0:04:46.914 ******* 2026-02-20 02:28:52.696817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 02:28:52.696854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:52.696878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:52.696887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:52.696897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:52.696924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 02:28:52.696933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:52.696947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:52.696956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:52.696968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:52.696976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 02:28:52.696985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:52.697000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290519 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:54.290554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-20 02:28:54.290572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-20 02:28:54.290585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:54.290618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:54.290639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:54.290695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:54.290707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:54.290729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-20 02:28:55.028820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:55.028951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.028964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.028971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:55.028977 | orchestrator | 2026-02-20 02:28:55.028984 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-20 02:28:55.028991 | orchestrator | Friday 20 February 2026 02:28:54 +0000 (0:00:04.182) 0:04:51.096 ******* 2026-02-20 02:28:55.028998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-20 02:28:55.029024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:55.029044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.029050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.029060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:55.029069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-20 02:28:55.029076 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-20 02:28:55.029091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:55.237005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:55.237076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237106 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:55.237131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:55.237136 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:55.237152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-20 02:28:55.237160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:55.237165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:55.237185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:55.237239 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:55.237245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-20 02:28:55.237255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 02:28:56.805761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:56.805927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:56.805942 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 02:28:56.805954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-20 02:28:56.805983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-20 02:28:56.805993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:56.806060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 02:28:56.806076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 02:28:56.806085 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:28:56.806095 | orchestrator | 2026-02-20 02:28:56.806104 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-20 02:28:56.806113 | orchestrator | Friday 20 February 2026 02:28:55 +0000 (0:00:00.921) 0:04:52.017 ******* 2026-02-20 02:28:56.806121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-20 02:28:56.806131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-20 02:28:56.806142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:28:56.806159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:28:56.806169 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:28:56.806177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-20 02:28:56.806185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-20 02:28:56.806193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:28:56.806201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:28:56.806209 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:28:56.806217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-20 02:28:56.806225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-20 02:28:56.806234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:28:56.806248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-20 02:29:04.547811 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:04.547939 | orchestrator | 2026-02-20 02:29:04.547960 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-20 02:29:04.547972 | orchestrator | Friday 20 February 2026 02:28:56 +0000 (0:00:01.410) 0:04:53.428 ******* 2026-02-20 02:29:04.547979 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:04.547987 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:04.547995 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:04.548002 | orchestrator | 2026-02-20 02:29:04.548010 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-20 02:29:04.548018 | orchestrator | Friday 20 February 2026 02:28:57 +0000 (0:00:00.437) 0:04:53.865 ******* 2026-02-20 02:29:04.548025 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:04.548032 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:04.548040 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:04.548047 | orchestrator | 2026-02-20 02:29:04.548054 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-20 02:29:04.548062 | orchestrator | Friday 20 February 2026 02:28:58 +0000 (0:00:01.337) 0:04:55.203 ******* 2026-02-20 02:29:04.548069 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:29:04.548098 | orchestrator | 2026-02-20 02:29:04.548106 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-20 02:29:04.548113 | orchestrator | Friday 20 February 2026 02:29:00 +0000 (0:00:01.799) 0:04:57.002 ******* 
2026-02-20 02:29:04.548125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:29:04.548139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:29:04.548182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:29:04.548191 | orchestrator | 2026-02-20 02:29:04.548199 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-20 02:29:04.548221 | orchestrator | Friday 20 February 2026 02:29:02 +0000 (0:00:02.182) 0:04:59.184 ******* 2026-02-20 02:29:04.548234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 02:29:04.548250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 02:29:04.548258 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:04.548265 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:04.548273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 02:29:04.548283 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:04.548295 | orchestrator | 2026-02-20 02:29:04.548307 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-20 02:29:04.548320 | orchestrator | Friday 20 February 2026 02:29:02 +0000 (0:00:00.451) 0:04:59.636 ******* 2026-02-20 02:29:04.548334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 02:29:04.548347 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:04.548359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 02:29:04.548371 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:04.548382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 02:29:04.548394 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:04.548407 | orchestrator | 2026-02-20 02:29:04.548421 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-20 02:29:04.548434 | orchestrator | Friday 20 February 
2026 02:29:03 +0000 (0:00:00.946) 0:05:00.582 ******* 2026-02-20 02:29:04.548464 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:14.620855 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:14.620987 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:14.621013 | orchestrator | 2026-02-20 02:29:14.621032 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-20 02:29:14.621051 | orchestrator | Friday 20 February 2026 02:29:04 +0000 (0:00:00.605) 0:05:01.188 ******* 2026-02-20 02:29:14.621068 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:14.621085 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:14.621101 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:14.621116 | orchestrator | 2026-02-20 02:29:14.621134 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-20 02:29:14.621152 | orchestrator | Friday 20 February 2026 02:29:05 +0000 (0:00:01.389) 0:05:02.577 ******* 2026-02-20 02:29:14.621187 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:29:14.621204 | orchestrator | 2026-02-20 02:29:14.621220 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-20 02:29:14.621234 | orchestrator | Friday 20 February 2026 02:29:07 +0000 (0:00:01.478) 0:05:04.055 ******* 2026-02-20 02:29:14.621249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 02:29:14.621376 | orchestrator | 2026-02-20 02:29:14.621409 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-20 02:29:14.621433 | orchestrator | Friday 20 February 2026 02:29:13 +0000 (0:00:06.555) 0:05:10.611 ******* 2026-02-20 02:29:14.621445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 02:29:14.621482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 02:29:20.500187 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:20.500322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 02:29:20.500342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 02:29:20.500354 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:20.500364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 02:29:20.500396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 02:29:20.500406 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:20.500415 | orchestrator | 2026-02-20 02:29:20.500425 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-20 
02:29:20.500435 | orchestrator | Friday 20 February 2026 02:29:14 +0000 (0:00:00.646) 0:05:11.257 ******* 2026-02-20 02:29:20.500461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500506 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:20.500515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500542 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500551 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:20.500559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-20 02:29:20.500602 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:20.500611 | orchestrator | 2026-02-20 02:29:20.500620 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-20 02:29:20.500628 | orchestrator | Friday 20 February 2026 02:29:15 +0000 (0:00:00.974) 0:05:12.232 ******* 2026-02-20 02:29:20.500637 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:29:20.500646 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:29:20.500654 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:29:20.500663 | orchestrator | 2026-02-20 02:29:20.500672 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-20 02:29:20.500680 | orchestrator | Friday 20 February 2026 02:29:16 +0000 (0:00:01.326) 0:05:13.558 ******* 2026-02-20 02:29:20.500689 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:29:20.500698 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:29:20.500707 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:29:20.500715 | orchestrator | 2026-02-20 02:29:20.500724 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-20 02:29:20.500734 | orchestrator | Friday 20 February 2026 02:29:19 +0000 (0:00:02.251) 0:05:15.810 ******* 2026-02-20 02:29:20.500744 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:20.500880 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:20.500900 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:20.500917 | orchestrator | 2026-02-20 02:29:20.500935 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-20 02:29:20.500951 | orchestrator | Friday 20 February 2026 02:29:19 +0000 (0:00:00.648) 0:05:16.459 ******* 2026-02-20 02:29:20.500963 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:20.500973 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:29:20.500983 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:29:20.500993 | orchestrator | 2026-02-20 02:29:20.501003 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-20 02:29:20.501013 | orchestrator | Friday 20 February 2026 02:29:20 +0000 (0:00:00.320) 0:05:16.780 ******* 2026-02-20 02:29:20.501022 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:29:20.501040 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:30:03.806133 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:30:03.806252 | orchestrator | 2026-02-20 02:30:03.806269 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-20 02:30:03.806283 | orchestrator | Friday 20 February 2026 02:29:20 +0000 (0:00:00.358) 0:05:17.139 ******* 2026-02-20 02:30:03.806295 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:30:03.806306 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:30:03.806335 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:30:03.806347 | orchestrator | 2026-02-20 02:30:03.806357 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-20 02:30:03.806368 | orchestrator | Friday 20 February 2026 02:29:20 +0000 (0:00:00.317) 0:05:17.456 ******* 2026-02-20 02:30:03.806376 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:30:03.806382 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:30:03.806389 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:30:03.806396 | orchestrator | 2026-02-20 02:30:03.806403 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-20 02:30:03.806409 | orchestrator | Friday 20 February 2026 02:29:21 +0000 (0:00:00.645) 0:05:18.102 ******* 2026-02-20 02:30:03.806415 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:30:03.806421 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:30:03.806428 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:30:03.806452 | orchestrator | 2026-02-20 02:30:03.806459 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-20 02:30:03.806465 | orchestrator | Friday 20 February 2026 02:29:22 +0000 (0:00:00.564) 0:05:18.666 ******* 2026-02-20 02:30:03.806471 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:30:03.806479 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:30:03.806485 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:30:03.806491 | orchestrator | 2026-02-20 02:30:03.806497 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-20 02:30:03.806504 | orchestrator | Friday 20 February 2026 02:29:22 +0000 (0:00:00.679) 0:05:19.345 ******* 2026-02-20 02:30:03.806510 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:30:03.806516 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:30:03.806522 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:30:03.806528 | orchestrator | 2026-02-20 02:30:03.806534 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-20 02:30:03.806540 | orchestrator | Friday 20 February 2026 02:29:23 +0000 (0:00:00.651) 0:05:19.997 ******* 2026-02-20 02:30:03.806546 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:30:03.806553 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:30:03.806559 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:30:03.806565 | orchestrator | 2026-02-20 02:30:03.806571 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-20 02:30:03.806577 | orchestrator | Friday 20 February 2026 02:29:24 +0000 (0:00:00.921) 0:05:20.919 ******* 2026-02-20 02:30:03.806583 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:30:03.806589 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:30:03.806595 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:30:03.806601 | orchestrator | 2026-02-20 02:30:03.806608 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-20 02:30:03.806615 | orchestrator | Friday 20 February 2026 02:29:25 +0000 (0:00:00.877) 0:05:21.796 ******* 2026-02-20 02:30:03.806622 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:30:03.806629 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:30:03.806636 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:30:03.806643 | orchestrator | 2026-02-20 02:30:03.806650 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-02-20 02:30:03.806657 | orchestrator | Friday 20 February 2026 02:29:26 +0000 (0:00:00.907) 0:05:22.704 *******
2026-02-20 02:30:03.806664 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:30:03.806672 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:30:03.806680 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:30:03.806687 | orchestrator |
2026-02-20 02:30:03.806694 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-20 02:30:03.806701 | orchestrator | Friday 20 February 2026 02:29:35 +0000 (0:00:09.472) 0:05:32.177 *******
2026-02-20 02:30:03.806708 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:30:03.806715 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:30:03.806722 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:30:03.806729 | orchestrator |
2026-02-20 02:30:03.806810 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-20 02:30:03.806818 | orchestrator | Friday 20 February 2026 02:29:36 +0000 (0:00:01.166) 0:05:33.344 *******
2026-02-20 02:30:03.806825 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:30:03.806833 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:30:03.806840 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:30:03.806847 | orchestrator |
2026-02-20 02:30:03.806855 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-20 02:30:03.806862 | orchestrator | Friday 20 February 2026 02:29:46 +0000 (0:00:09.844) 0:05:43.188 *******
2026-02-20 02:30:03.806869 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:30:03.806876 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:30:03.806884 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:30:03.806891 | orchestrator |
2026-02-20 02:30:03.806898 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-20 02:30:03.806906 | orchestrator | Friday 20 February 2026 02:29:51 +0000 (0:00:04.640) 0:05:47.828 *******
2026-02-20 02:30:03.806925 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:30:03.806935 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:30:03.806946 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:30:03.806956 | orchestrator |
2026-02-20 02:30:03.806966 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-20 02:30:03.806976 | orchestrator | Friday 20 February 2026 02:29:59 +0000 (0:00:07.989) 0:05:55.818 *******
2026-02-20 02:30:03.806986 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.806996 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807007 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807017 | orchestrator |
2026-02-20 02:30:03.807028 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-20 02:30:03.807038 | orchestrator | Friday 20 February 2026 02:29:59 +0000 (0:00:00.551) 0:05:56.369 *******
2026-02-20 02:30:03.807049 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.807060 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807069 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807079 | orchestrator |
2026-02-20 02:30:03.807110 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-20 02:30:03.807122 | orchestrator | Friday 20 February 2026 02:30:00 +0000 (0:00:00.314) 0:05:56.684 *******
2026-02-20 02:30:03.807133 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.807142 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807153 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807163 | orchestrator |
2026-02-20 02:30:03.807182 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-20 02:30:03.807193 | orchestrator | Friday 20 February 2026 02:30:00 +0000 (0:00:00.302) 0:05:56.986 *******
2026-02-20 02:30:03.807204 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.807214 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807225 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807235 | orchestrator |
2026-02-20 02:30:03.807246 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-20 02:30:03.807257 | orchestrator | Friday 20 February 2026 02:30:00 +0000 (0:00:00.339) 0:05:57.326 *******
2026-02-20 02:30:03.807266 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.807277 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807287 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807297 | orchestrator |
2026-02-20 02:30:03.807307 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-20 02:30:03.807317 | orchestrator | Friday 20 February 2026 02:30:01 +0000 (0:00:00.564) 0:05:57.890 *******
2026-02-20 02:30:03.807327 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:03.807338 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:03.807348 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:03.807358 | orchestrator |
2026-02-20 02:30:03.807369 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-20 02:30:03.807379 | orchestrator | Friday 20 February 2026 02:30:01 +0000 (0:00:00.333) 0:05:58.223 *******
2026-02-20 02:30:03.807390 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:30:03.807400 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:30:03.807410 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:30:03.807420 | orchestrator |
2026-02-20 02:30:03.807430 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-20 02:30:03.807440 | orchestrator | Friday 20 February 2026 02:30:02 +0000 (0:00:00.848) 0:05:59.072 *******
2026-02-20 02:30:03.807450 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:30:03.807460 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:30:03.807470 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:30:03.807481 | orchestrator |
2026-02-20 02:30:03.807492 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:30:03.807504 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-20 02:30:03.807523 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-20 02:30:03.807534 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-20 02:30:03.807544 | orchestrator |
2026-02-20 02:30:03.807554 | orchestrator |
2026-02-20 02:30:03.807565 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:30:03.807576 | orchestrator | Friday 20 February 2026 02:30:03 +0000 (0:00:00.796) 0:05:59.869 *******
2026-02-20 02:30:03.807586 | orchestrator | ===============================================================================
2026-02-20 02:30:03.807596 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.84s
2026-02-20 02:30:03.807606 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.47s
2026-02-20 02:30:03.807616 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.99s
2026-02-20 02:30:03.807626 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.56s
2026-02-20 02:30:03.807637 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.37s
2026-02-20 02:30:03.807647 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.64s
2026-02-20 02:30:03.807658 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.35s
2026-02-20 02:30:03.807668 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.31s
2026-02-20 02:30:03.807678 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.19s
2026-02-20 02:30:03.807688 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.18s
2026-02-20 02:30:03.807698 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.89s
2026-02-20 02:30:03.807709 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.85s
2026-02-20 02:30:03.807719 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.84s
2026-02-20 02:30:03.807728 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.79s
2026-02-20 02:30:03.807758 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.72s
2026-02-20 02:30:03.807768 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.64s
2026-02-20 02:30:03.807777 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.58s
2026-02-20 02:30:03.807788 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.53s
2026-02-20 02:30:03.807798 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.51s
2026-02-20 02:30:03.807809 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.51s
2026-02-20 02:30:05.824939 | orchestrator | 2026-02-20 02:30:05 | INFO  | Task 47602c5a-68aa-4aae-8359-e2cadff7d646 (opensearch) was prepared for execution.
2026-02-20 02:30:05.825036 | orchestrator | 2026-02-20 02:30:05 | INFO  | It takes a moment until task 47602c5a-68aa-4aae-8359-e2cadff7d646 (opensearch) has been started and output is visible here.
2026-02-20 02:30:15.713514 | orchestrator |
2026-02-20 02:30:15.713652 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:30:15.713680 | orchestrator |
2026-02-20 02:30:15.713715 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 02:30:15.713786 | orchestrator | Friday 20 February 2026 02:30:09 +0000 (0:00:00.229) 0:00:00.229 *******
2026-02-20 02:30:15.713808 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:30:15.713828 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:30:15.713845 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:30:15.713862 | orchestrator |
2026-02-20 02:30:15.713881 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 02:30:15.713931 | orchestrator | Friday 20 February 2026 02:30:09 +0000 (0:00:00.258) 0:00:00.487 *******
2026-02-20 02:30:15.713953 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-20 02:30:15.713973 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-20 02:30:15.713992 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-20 02:30:15.714011 | orchestrator |
2026-02-20 02:30:15.714151 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-20 02:30:15.714170 | orchestrator |
2026-02-20 02:30:15.714183 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-20 02:30:15.714195 | orchestrator | Friday 20 February 2026 02:30:10 +0000 (0:00:00.360) 0:00:00.848 *******
2026-02-20 02:30:15.714208 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-20 02:30:15.714221 | orchestrator | 2026-02-20 02:30:15.714232 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-20 02:30:15.714246 | orchestrator | Friday 20 February 2026 02:30:10 +0000 (0:00:00.448) 0:00:01.297 ******* 2026-02-20 02:30:15.714259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:30:15.714269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:30:15.714280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-20 02:30:15.714291 | orchestrator | 2026-02-20 02:30:15.714301 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-20 02:30:15.714312 | orchestrator | Friday 20 February 2026 02:30:11 +0000 (0:00:00.637) 0:00:01.934 ******* 2026-02-20 02:30:15.714327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:15.714342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:15.714387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:15.714415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:15.714430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:15.714443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:15.714456 | orchestrator | 2026-02-20 02:30:15.714467 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-20 02:30:15.714478 | orchestrator | Friday 20 February 2026 02:30:12 +0000 (0:00:01.558) 0:00:03.492 ******* 2026-02-20 02:30:15.714489 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:30:15.714500 | orchestrator | 2026-02-20 02:30:15.714517 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-20 02:30:15.714528 | orchestrator | Friday 20 February 2026 02:30:13 +0000 (0:00:00.484) 0:00:03.977 ******* 2026-02-20 02:30:15.714554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:16.367692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:16.367837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-20 02:30:16.367852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:16.367884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:16.367943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-20 02:30:16.367958 | orchestrator | 2026-02-20 02:30:16.367972 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-20 02:30:16.367981 | orchestrator | Friday 20 February 2026 02:30:15 +0000 (0:00:02.366) 0:00:06.343 ******* 
2026-02-20 02:30:16.367989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:30:16.367997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-20 02:30:16.368010 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:30:16.368023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:30:16.368037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-20 02:30:17.253963 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:30:17.254309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-20 02:30:17.254351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:17.254401 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:17.254420 | orchestrator |
2026-02-20 02:30:17.254437 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-20 02:30:17.254455 | orchestrator | Friday 20 February 2026 02:30:16 +0000 (0:00:00.654) 0:00:06.997 *******
2026-02-20 02:30:17.254491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:17.254512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:17.254555 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:30:17.254574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:17.254594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:17.254626 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:30:17.254646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:17.254672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:17.254690 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:30:17.254706 | orchestrator |
2026-02-20 02:30:17.254716 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-20 02:30:17.254766 | orchestrator | Friday 20 February 2026 02:30:17 +0000 (0:00:00.877) 0:00:07.875 *******
2026-02-20 02:30:25.057690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:25.057865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:25.057901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:25.057926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:25.057954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:25.057969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:30:25.057991 | orchestrator |
2026-02-20 02:30:25.058006 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-20 02:30:25.058073 | orchestrator | Friday 20 February 2026 02:30:19 +0000 (0:00:02.319) 0:00:10.195 *******
2026-02-20 02:30:25.058082 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:30:25.058090 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:30:25.058098 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:30:25.058105 | orchestrator |
2026-02-20 02:30:25.058112 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-20 02:30:25.058120 | orchestrator | Friday 20 February 2026 02:30:21 +0000 (0:00:02.202) 0:00:12.397 *******
2026-02-20 02:30:25.058127 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:30:25.058134 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:30:25.058142 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:30:25.058149 | orchestrator |
2026-02-20 02:30:25.058156 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-02-20 02:30:25.058163 | orchestrator | Friday 20 February 2026 02:30:23 +0000 (0:00:01.665) 0:00:14.063 *******
2026-02-20 02:30:25.058178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:25.058190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:30:25.058205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-20 02:32:50.739449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:32:50.739589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:32:50.739610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-20 02:32:50.739625 | orchestrator |
2026-02-20 02:32:50.739639 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-20 02:32:50.739652 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:01.624) 0:00:15.687 *******
2026-02-20 02:32:50.739663 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:32:50.739676 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:32:50.739739 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:32:50.739752 | orchestrator |
2026-02-20 02:32:50.739769 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-20 02:32:50.739820 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:00.256) 0:00:15.944 *******
2026-02-20 02:32:50.739840 | orchestrator |
2026-02-20 02:32:50.739862 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-20 02:32:50.739874 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:00.058) 0:00:16.003 *******
2026-02-20 02:32:50.739885 | orchestrator |
2026-02-20 02:32:50.739896 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-20 02:32:50.739907 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:00.062) 0:00:16.065 *******
2026-02-20 02:32:50.739918 | orchestrator |
2026-02-20 02:32:50.739929 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-20 02:32:50.739958 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:00.059) 0:00:16.124 *******
2026-02-20 02:32:50.739970 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:32:50.739981 | orchestrator |
2026-02-20 02:32:50.739992 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-20 02:32:50.740003 | orchestrator | Friday 20 February 2026 02:30:25 +0000 (0:00:00.204) 0:00:16.329 *******
2026-02-20 02:32:50.740014 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:32:50.740025 | orchestrator |
2026-02-20 02:32:50.740035 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-20 02:32:50.740046 | orchestrator | Friday 20 February 2026 02:30:26 +0000 (0:00:00.480) 0:00:16.810 *******
2026-02-20 02:32:50.740057 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:32:50.740068 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:32:50.740084 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:32:50.740103 | orchestrator |
2026-02-20 02:32:50.740121 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-20 02:32:50.740139 | orchestrator | Friday 20 February 2026 02:31:16 +0000 (0:00:50.603) 0:01:07.413 *******
2026-02-20 02:32:50.740157 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:32:50.740174 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:32:50.740193 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:32:50.740210 | orchestrator |
2026-02-20 02:32:50.740221 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-20 02:32:50.740232 | orchestrator | Friday 20 February 2026 02:32:39 +0000 (0:01:23.008) 0:02:30.422 *******
2026-02-20 02:32:50.740244 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:32:50.740254 | orchestrator |
2026-02-20 02:32:50.740265 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-20 02:32:50.740276 | orchestrator | Friday 20 February 2026 02:32:40 +0000 (0:00:00.471) 0:02:30.893 *******
2026-02-20 02:32:50.740287 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:32:50.740298 | orchestrator |
2026-02-20 02:32:50.740309 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-20 02:32:50.740319 | orchestrator | Friday 20 February 2026 02:32:43 +0000 (0:00:02.771) 0:02:33.665 *******
2026-02-20 02:32:50.740330 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:32:50.740341 | orchestrator |
2026-02-20 02:32:50.740351 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-20 02:32:50.740363 | orchestrator | Friday 20 February 2026 02:32:45 +0000 (0:00:02.321) 0:02:35.987 *******
2026-02-20 02:32:50.740381 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:32:50.740400 | orchestrator |
2026-02-20 02:32:50.740417 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-20 02:32:50.740429 | orchestrator | Friday 20 February 2026 02:32:48 +0000 (0:00:02.680) 0:02:38.668 *******
2026-02-20 02:32:50.740440 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:32:50.740450 | orchestrator |
2026-02-20 02:32:50.740462 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:32:50.740481 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 02:32:50.740504 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 02:32:50.740515 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 02:32:50.740526 | orchestrator |
2026-02-20 02:32:50.740536 | orchestrator |
2026-02-20 02:32:50.740547 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:32:50.740558 | orchestrator | Friday 20 February 2026 02:32:50 +0000 (0:00:02.683) 0:02:41.351 *******
2026-02-20 02:32:50.740568 | orchestrator | ===============================================================================
2026-02-20 02:32:50.740579 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.01s
2026-02-20 02:32:50.740589 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.60s
2026-02-20 02:32:50.740600 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.77s
2026-02-20 02:32:50.740610 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.68s
2026-02-20 02:32:50.740621 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.68s
2026-02-20 02:32:50.740631 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.37s
2026-02-20 02:32:50.740642 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.32s
2026-02-20 02:32:50.740652 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s
2026-02-20 02:32:50.740662 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.20s
2026-02-20 02:32:50.740673 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s
2026-02-20 02:32:50.740683 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.62s
2026-02-20 02:32:50.740717 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.56s
2026-02-20 02:32:50.740728 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s
2026-02-20 02:32:50.740738 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.65s
2026-02-20 02:32:50.740749 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s
2026-02-20 02:32:50.740760 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s
2026-02-20 02:32:50.740779 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.48s
2026-02-20 02:32:51.015172 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s
2026-02-20 02:32:51.015276 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s
2026-02-20 02:32:51.015291 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s
2026-02-20 02:32:53.233466 | orchestrator | 2026-02-20 02:32:53 | INFO  | Task 4a531200-4725-48e3-9458-ed22cfd67a94 (memcached) was prepared for execution.
2026-02-20 02:32:53.233571 | orchestrator | 2026-02-20 02:32:53 | INFO  | It takes a moment until task 4a531200-4725-48e3-9458-ed22cfd67a94 (memcached) has been started and output is visible here.
2026-02-20 02:33:04.592667 | orchestrator |
2026-02-20 02:33:04.592866 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:33:04.592898 | orchestrator |
2026-02-20 02:33:04.592919 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 02:33:04.592939 | orchestrator | Friday 20 February 2026 02:32:57 +0000 (0:00:00.246) 0:00:00.246 *******
2026-02-20 02:33:04.592959 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:33:04.592978 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:33:04.592998 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:33:04.593017 | orchestrator |
2026-02-20 02:33:04.593038 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 02:33:04.593095 | orchestrator | Friday 20 February 2026 02:32:57 +0000 (0:00:00.291) 0:00:00.538 *******
2026-02-20 02:33:04.593117 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-20 02:33:04.593135 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-20 02:33:04.593181 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-20 02:33:04.593202 | orchestrator |
2026-02-20 02:33:04.593221 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-20 02:33:04.593240 | orchestrator |
2026-02-20 02:33:04.593254 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-20 02:33:04.593267 | orchestrator | Friday 20 February 2026 02:32:57 +0000 (0:00:00.393) 0:00:00.931 *******
2026-02-20 02:33:04.593280 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:33:04.593293 | orchestrator |
2026-02-20 02:33:04.593306 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-20 02:33:04.593318 | orchestrator | Friday 20 February 2026 02:32:58 +0000 (0:00:00.493) 0:00:01.425 *******
2026-02-20 02:33:04.593331 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-20 02:33:04.593344 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-20 02:33:04.593357 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-20 02:33:04.593369 | orchestrator |
2026-02-20 02:33:04.593382 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-20 02:33:04.593412 | orchestrator | Friday 20 February 2026 02:32:59 +0000 (0:00:00.641) 0:00:02.066 *******
2026-02-20 02:33:04.593425 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-20 02:33:04.593438 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-20 02:33:04.593450 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-20 02:33:04.593469 | orchestrator |
2026-02-20 02:33:04.593487 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-20 02:33:04.593505 | orchestrator | Friday 20 February 2026 02:33:00 +0000 (0:00:01.579) 0:00:03.646 *******
2026-02-20 02:33:04.593522 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:33:04.593541 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:33:04.593561 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:33:04.593578 | orchestrator |
2026-02-20 02:33:04.593596 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-20 02:33:04.593614 | orchestrator | Friday 20 February 2026 02:33:02 +0000 (0:00:01.432) 0:00:05.079 *******
2026-02-20 02:33:04.593632 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:33:04.593650 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:33:04.593668 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:33:04.593738 | orchestrator |
2026-02-20 02:33:04.593757 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:33:04.593770 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:33:04.593782 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:33:04.593793 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 02:33:04.593804 | orchestrator |
2026-02-20 02:33:04.593815 | orchestrator |
2026-02-20 02:33:04.593826 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:33:04.593837 | orchestrator | Friday 20 February 2026 02:33:04 +0000 (0:00:02.143) 0:00:07.222 *******
2026-02-20 02:33:04.593848 | orchestrator | ===============================================================================
2026-02-20 02:33:04.593859 | orchestrator | memcached : Restart memcached container --------------------------------- 2.14s
2026-02-20 02:33:04.593870 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.58s
2026-02-20 02:33:04.593894 | orchestrator | memcached : Check memcached container ----------------------------------- 1.43s
2026-02-20 02:33:04.593904 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.64s
2026-02-20 02:33:04.593915 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.49s
2026-02-20 02:33:04.593926 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2026-02-20 02:33:04.593936 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-02-20 02:33:06.843523 | orchestrator | 2026-02-20 02:33:06 | INFO  | Task 9ee3eb74-4714-436c-a553-bbeaaf381b93 (redis) was prepared for execution.
2026-02-20 02:33:06.843636 | orchestrator | 2026-02-20 02:33:06 | INFO  | It takes a moment until task 9ee3eb74-4714-436c-a553-bbeaaf381b93 (redis) has been started and output is visible here.
2026-02-20 02:33:14.631873 | orchestrator |
2026-02-20 02:33:14.631997 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:33:14.632015 | orchestrator |
2026-02-20 02:33:14.632027 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 02:33:14.632039 | orchestrator | Friday 20 February 2026 02:33:10 +0000 (0:00:00.183) 0:00:00.183 *******
2026-02-20 02:33:14.632049 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:33:14.632062 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:33:14.632073 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:33:14.632084 | orchestrator |
2026-02-20 02:33:14.632095 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 02:33:14.632106 | orchestrator | Friday 20 February 2026 02:33:10 +0000 (0:00:00.224) 0:00:00.407 *******
2026-02-20 02:33:14.632116 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-20 02:33:14.632128 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-20 02:33:14.632138 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-20 02:33:14.632149 | orchestrator |
2026-02-20 02:33:14.632160 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-20 02:33:14.632170 | orchestrator |
2026-02-20 02:33:14.632181 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-20 02:33:14.632192 | orchestrator | Friday 20 February 2026 02:33:10 +0000 (0:00:00.294) 0:00:00.702 *******
2026-02-20 02:33:14.632202 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:33:14.632214 | orchestrator |
2026-02-20 02:33:14.632225 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-20 02:33:14.632237 | orchestrator | Friday 20 February 2026 02:33:11 +0000 (0:00:00.341) 0:00:01.044 *******
2026-02-20 02:33:14.632268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 02:33:14.632287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 02:33:14.632299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 02:33:14.632333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 02:33:14.632367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 02:33:14.632381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes':
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:14.632394 | orchestrator | 2026-02-20 02:33:14.632407 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-20 02:33:14.632419 | orchestrator | Friday 20 February 2026 02:33:12 +0000 (0:00:01.019) 0:00:02.063 ******* 2026-02-20 02:33:14.632432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:14.632536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:14.632569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:14.632582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:14.632605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640268 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640404 | orchestrator | 2026-02-20 02:33:18.640433 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-20 02:33:18.640455 | orchestrator | Friday 20 February 2026 02:33:14 +0000 (0:00:02.280) 0:00:04.344 ******* 2026-02-20 02:33:18.640477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640715 | orchestrator | 2026-02-20 02:33:18.640736 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-20 02:33:18.640756 | orchestrator | Friday 20 February 2026 02:33:16 +0000 (0:00:02.307) 0:00:06.652 ******* 2026-02-20 02:33:18.640777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:18.640916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 02:33:29.926680 | orchestrator | 2026-02-20 02:33:29.926814 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-20 02:33:29.926829 | orchestrator | Friday 20 February 2026 02:33:18 +0000 (0:00:01.492) 0:00:08.144 ******* 2026-02-20 02:33:29.926839 | orchestrator | 2026-02-20 02:33:29.926850 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-20 02:33:29.926860 | orchestrator | Friday 20 February 2026 02:33:18 +0000 (0:00:00.072) 0:00:08.217 ******* 2026-02-20 02:33:29.926869 | orchestrator | 2026-02-20 02:33:29.926879 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-02-20 02:33:29.926889 | orchestrator | Friday 20 February 2026 02:33:18 +0000 (0:00:00.064) 0:00:08.281 ******* 2026-02-20 02:33:29.926899 | orchestrator | 2026-02-20 02:33:29.926908 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-20 02:33:29.926944 | orchestrator | Friday 20 February 2026 02:33:18 +0000 (0:00:00.064) 0:00:08.345 ******* 2026-02-20 02:33:29.926955 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:29.926966 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:33:29.926975 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:33:29.926985 | orchestrator | 2026-02-20 02:33:29.926994 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-20 02:33:29.927004 | orchestrator | Friday 20 February 2026 02:33:26 +0000 (0:00:07.915) 0:00:16.261 ******* 2026-02-20 02:33:29.927014 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:29.927024 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:33:29.927034 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:33:29.927043 | orchestrator | 2026-02-20 02:33:29.927067 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:33:29.927078 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:33:29.927089 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:33:29.927099 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:33:29.927108 | orchestrator | 2026-02-20 02:33:29.927118 | orchestrator | 2026-02-20 02:33:29.927128 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:33:29.927137 | orchestrator | Friday 20 February 
2026 02:33:29 +0000 (0:00:03.110) 0:00:19.372 ******* 2026-02-20 02:33:29.927147 | orchestrator | =============================================================================== 2026-02-20 02:33:29.927156 | orchestrator | redis : Restart redis container ----------------------------------------- 7.92s 2026-02-20 02:33:29.927166 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.11s 2026-02-20 02:33:29.927176 | orchestrator | redis : Copying over redis config files --------------------------------- 2.31s 2026-02-20 02:33:29.927185 | orchestrator | redis : Copying over default config.json files -------------------------- 2.28s 2026-02-20 02:33:29.927195 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s 2026-02-20 02:33:29.927204 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.02s 2026-02-20 02:33:29.927214 | orchestrator | redis : include_tasks --------------------------------------------------- 0.34s 2026-02-20 02:33:29.927223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2026-02-20 02:33:29.927234 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s 2026-02-20 02:33:29.927245 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s 2026-02-20 02:33:32.160166 | orchestrator | 2026-02-20 02:33:32 | INFO  | Task 2a9285a5-04e8-4a51-b757-fec7c64774cd (mariadb) was prepared for execution. 2026-02-20 02:33:32.160747 | orchestrator | 2026-02-20 02:33:32 | INFO  | It takes a moment until task 2a9285a5-04e8-4a51-b757-fec7c64774cd (mariadb) has been started and output is visible here. 
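Each container item in the redis task output above carries a `healthcheck` dict whose duration fields (`interval`, `timeout`, `start_period`) are seconds encoded as strings, while `retries` is a plain count. Before such a spec can reach the Docker Engine API, the durations must become integer nanoseconds. A hedged sketch of that conversion, under the assumption that these are second-valued fields as the logged values suggest (this is illustrative, not the actual kolla-ansible code):

```python
def to_docker_healthcheck(spec: dict) -> dict:
    """Convert a kolla-style healthcheck spec (seconds as strings) into
    the shape the Docker Engine API expects (durations in nanoseconds)."""
    ns = 1_000_000_000  # nanoseconds per second
    return {
        "Test": spec["test"],  # e.g. ['CMD-SHELL', 'healthcheck_listen redis-server 6379']
        "Interval": int(spec["interval"]) * ns,
        "Timeout": int(spec["timeout"]) * ns,
        "StartPeriod": int(spec["start_period"]) * ns,
        "Retries": int(spec["retries"]),  # a count, not a duration
    }

hc = to_docker_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"], "timeout": "30",
})
assert hc["Interval"] == 30_000_000_000
```

The same shape applies to the redis-sentinel and mariadb items that follow; only the `test` command differs (`healthcheck_listen redis-sentinel 26379`, `/usr/bin/clustercheck`).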
2026-02-20 02:33:43.976082 | orchestrator | 2026-02-20 02:33:43.976204 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 02:33:43.976221 | orchestrator | 2026-02-20 02:33:43.976233 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 02:33:43.976244 | orchestrator | Friday 20 February 2026 02:33:35 +0000 (0:00:00.122) 0:00:00.122 ******* 2026-02-20 02:33:43.976256 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:33:43.976267 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:33:43.976279 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:33:43.976290 | orchestrator | 2026-02-20 02:33:43.976301 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 02:33:43.976312 | orchestrator | Friday 20 February 2026 02:33:36 +0000 (0:00:00.228) 0:00:00.351 ******* 2026-02-20 02:33:43.976348 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-20 02:33:43.976360 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-20 02:33:43.976371 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-20 02:33:43.976381 | orchestrator | 2026-02-20 02:33:43.976392 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-20 02:33:43.976403 | orchestrator | 2026-02-20 02:33:43.976414 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-20 02:33:43.976425 | orchestrator | Friday 20 February 2026 02:33:36 +0000 (0:00:00.406) 0:00:00.757 ******* 2026-02-20 02:33:43.976439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 02:33:43.976460 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-20 02:33:43.976486 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-20 02:33:43.976509 | orchestrator | 
2026-02-20 02:33:43.976529 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 02:33:43.976564 | orchestrator | Friday 20 February 2026 02:33:36 +0000 (0:00:00.331) 0:00:01.088 ******* 2026-02-20 02:33:43.976584 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:33:43.976603 | orchestrator | 2026-02-20 02:33:43.976622 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-20 02:33:43.976640 | orchestrator | Friday 20 February 2026 02:33:37 +0000 (0:00:00.479) 0:00:01.568 ******* 2026-02-20 02:33:43.976712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:43.976766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:43.976802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:43.976816 | orchestrator | 2026-02-20 02:33:43.976829 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-20 02:33:43.976842 | orchestrator | Friday 20 February 2026 02:33:39 +0000 (0:00:02.157) 0:00:03.726 ******* 2026-02-20 02:33:43.976854 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:43.976867 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:43.976879 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:43.976892 | orchestrator | 2026-02-20 02:33:43.976904 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-20 02:33:43.976917 | orchestrator | Friday 20 February 2026 02:33:39 +0000 (0:00:00.529) 0:00:04.255 ******* 2026-02-20 02:33:43.976928 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:43.976938 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:43.976949 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:43.976960 | orchestrator | 2026-02-20 02:33:43.976971 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-20 02:33:43.976989 | orchestrator | Friday 20 February 2026 02:33:41 +0000 (0:00:01.312) 0:00:05.568 ******* 2026-02-20 02:33:43.977010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:51.116021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:51.116133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:51.116175 | orchestrator | 2026-02-20 02:33:51.116189 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-20 02:33:51.116202 | orchestrator | Friday 20 February 2026 02:33:43 +0000 (0:00:02.653) 0:00:08.221 ******* 2026-02-20 02:33:51.116214 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:51.116226 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:51.116237 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:51.116248 | orchestrator | 2026-02-20 02:33:51.116260 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-20 02:33:51.116288 | orchestrator | Friday 20 February 2026 02:33:45 +0000 (0:00:01.131) 0:00:09.353 ******* 2026-02-20 02:33:51.116300 | 
orchestrator | changed: [testbed-node-0] 2026-02-20 02:33:51.116311 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:33:51.116322 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:33:51.116333 | orchestrator | 2026-02-20 02:33:51.116344 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 02:33:51.116355 | orchestrator | Friday 20 February 2026 02:33:48 +0000 (0:00:03.496) 0:00:12.849 ******* 2026-02-20 02:33:51.116367 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:33:51.116378 | orchestrator | 2026-02-20 02:33:51.116389 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-20 02:33:51.116400 | orchestrator | Friday 20 February 2026 02:33:49 +0000 (0:00:00.484) 0:00:13.333 ******* 2026-02-20 02:33:51.116419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:51.116439 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:33:51.116459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:55.423493 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:55.423626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:55.423673 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:55.423712 | orchestrator | 2026-02-20 02:33:55.423725 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-20 02:33:55.423737 | orchestrator | Friday 20 February 2026 02:33:51 +0000 (0:00:02.030) 0:00:15.364 ******* 2026-02-20 02:33:55.423759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:55.423780 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:33:55.423840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:55.423882 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:55.423904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:55.423925 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:55.423939 | orchestrator | 2026-02-20 02:33:55.423951 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-20 02:33:55.423962 | orchestrator | Friday 20 February 2026 02:33:53 +0000 (0:00:02.219) 0:00:17.584 ******* 2026-02-20 02:33:55.423990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:58.086449 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:33:58.086567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:58.086589 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:33:58.086633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 02:33:58.086744 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:33:58.086769 | orchestrator | 2026-02-20 02:33:58.086789 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-20 02:33:58.086810 | orchestrator | Friday 20 February 2026 02:33:55 +0000 (0:00:02.090) 0:00:19.674 ******* 2026-02-20 02:33:58.086855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:58.086879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:33:58.086914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 02:36:06.609507 | orchestrator | 2026-02-20 02:36:06.609623 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-20 02:36:06.609641 | orchestrator | Friday 20 February 2026 02:33:58 +0000 (0:00:02.659) 0:00:22.334 ******* 2026-02-20 02:36:06.609652 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:06.609665 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:36:06.609729 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:36:06.609743 | orchestrator | 2026-02-20 02:36:06.609755 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-20 02:36:06.609766 | orchestrator | Friday 20 February 2026 02:33:58 +0000 (0:00:00.795) 0:00:23.129 ******* 2026-02-20 02:36:06.609778 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.609790 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.609801 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.609812 | orchestrator | 2026-02-20 02:36:06.609823 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-02-20 02:36:06.609835 | orchestrator | Friday 20 February 2026 02:33:59 +0000 (0:00:00.497) 0:00:23.627 ******* 2026-02-20 02:36:06.609846 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.609857 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.609868 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.609879 | orchestrator | 2026-02-20 02:36:06.609890 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-20 02:36:06.609901 | orchestrator | Friday 20 February 2026 02:33:59 +0000 (0:00:00.323) 0:00:23.951 ******* 2026-02-20 02:36:06.609913 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-20 02:36:06.609926 | orchestrator | ...ignoring 2026-02-20 02:36:06.609963 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-20 02:36:06.609975 | orchestrator | ...ignoring 2026-02-20 02:36:06.609986 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-20 02:36:06.609997 | orchestrator | ...ignoring 2026-02-20 02:36:06.610008 | orchestrator | 2026-02-20 02:36:06.610072 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-20 02:36:06.610087 | orchestrator | Friday 20 February 2026 02:34:10 +0000 (0:00:10.789) 0:00:34.740 ******* 2026-02-20 02:36:06.610101 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.610114 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.610136 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.610148 | orchestrator | 2026-02-20 02:36:06.610160 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-20 02:36:06.610174 | orchestrator | Friday 20 February 2026 02:34:10 +0000 (0:00:00.393) 0:00:35.134 ******* 2026-02-20 02:36:06.610203 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.610216 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610229 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610242 | orchestrator | 2026-02-20 02:36:06.610254 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-20 02:36:06.610268 | orchestrator | Friday 20 February 2026 02:34:11 +0000 (0:00:00.592) 0:00:35.726 ******* 2026-02-20 02:36:06.610281 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.610293 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610306 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610318 | orchestrator | 2026-02-20 02:36:06.610330 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-20 02:36:06.610343 | orchestrator | Friday 20 February 2026 02:34:11 +0000 (0:00:00.393) 0:00:36.120 ******* 2026-02-20 02:36:06.610356 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 02:36:06.610368 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610381 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610393 | orchestrator | 2026-02-20 02:36:06.610404 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-20 02:36:06.610415 | orchestrator | Friday 20 February 2026 02:34:12 +0000 (0:00:00.399) 0:00:36.519 ******* 2026-02-20 02:36:06.610426 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.610436 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.610447 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.610458 | orchestrator | 2026-02-20 02:36:06.610469 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-20 02:36:06.610481 | orchestrator | Friday 20 February 2026 02:34:12 +0000 (0:00:00.402) 0:00:36.922 ******* 2026-02-20 02:36:06.610492 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.610502 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610513 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610524 | orchestrator | 2026-02-20 02:36:06.610535 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 02:36:06.610546 | orchestrator | Friday 20 February 2026 02:34:13 +0000 (0:00:00.762) 0:00:37.684 ******* 2026-02-20 02:36:06.610557 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610568 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610579 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-20 02:36:06.610590 | orchestrator | 2026-02-20 02:36:06.610601 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-20 02:36:06.610612 | orchestrator | Friday 20 February 2026 02:34:13 +0000 (0:00:00.358) 0:00:38.043 ******* 2026-02-20 
02:36:06.610622 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:06.610633 | orchestrator | 2026-02-20 02:36:06.610644 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-20 02:36:06.610664 | orchestrator | Friday 20 February 2026 02:34:23 +0000 (0:00:10.038) 0:00:48.082 ******* 2026-02-20 02:36:06.610698 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.610711 | orchestrator | 2026-02-20 02:36:06.610722 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 02:36:06.610733 | orchestrator | Friday 20 February 2026 02:34:23 +0000 (0:00:00.128) 0:00:48.211 ******* 2026-02-20 02:36:06.610744 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.610773 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.610785 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.610844 | orchestrator | 2026-02-20 02:36:06.610856 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-20 02:36:06.610867 | orchestrator | Friday 20 February 2026 02:34:24 +0000 (0:00:00.902) 0:00:49.113 ******* 2026-02-20 02:36:06.610878 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:06.610889 | orchestrator | 2026-02-20 02:36:06.610900 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-20 02:36:06.610911 | orchestrator | Friday 20 February 2026 02:34:32 +0000 (0:00:07.205) 0:00:56.319 ******* 2026-02-20 02:36:06.610921 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.610932 | orchestrator | 2026-02-20 02:36:06.610943 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-20 02:36:06.610954 | orchestrator | Friday 20 February 2026 02:34:34 +0000 (0:00:02.579) 0:00:58.899 ******* 2026-02-20 02:36:06.610965 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.610976 | 
orchestrator | 2026-02-20 02:36:06.610987 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-20 02:36:06.610998 | orchestrator | Friday 20 February 2026 02:34:37 +0000 (0:00:02.448) 0:01:01.348 ******* 2026-02-20 02:36:06.611008 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:06.611019 | orchestrator | 2026-02-20 02:36:06.611030 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-20 02:36:06.611041 | orchestrator | Friday 20 February 2026 02:34:37 +0000 (0:00:00.109) 0:01:01.458 ******* 2026-02-20 02:36:06.611052 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.611063 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:06.611074 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:06.611085 | orchestrator | 2026-02-20 02:36:06.611096 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-20 02:36:06.611107 | orchestrator | Friday 20 February 2026 02:34:37 +0000 (0:00:00.296) 0:01:01.754 ******* 2026-02-20 02:36:06.611118 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:06.611129 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-20 02:36:06.611140 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:36:06.611151 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:36:06.611161 | orchestrator | 2026-02-20 02:36:06.611172 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-20 02:36:06.611183 | orchestrator | skipping: no hosts matched 2026-02-20 02:36:06.611194 | orchestrator | 2026-02-20 02:36:06.611205 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-20 02:36:06.611216 | orchestrator | 2026-02-20 02:36:06.611227 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-20 02:36:06.611238 | orchestrator | Friday 20 February 2026 02:34:37 +0000 (0:00:00.479) 0:01:02.234 ******* 2026-02-20 02:36:06.611249 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:36:06.611260 | orchestrator | 2026-02-20 02:36:06.611277 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 02:36:06.611288 | orchestrator | Friday 20 February 2026 02:34:53 +0000 (0:00:15.514) 0:01:17.748 ******* 2026-02-20 02:36:06.611299 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.611310 | orchestrator | 2026-02-20 02:36:06.611321 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 02:36:06.611336 | orchestrator | Friday 20 February 2026 02:35:10 +0000 (0:00:16.562) 0:01:34.310 ******* 2026-02-20 02:36:06.611355 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:06.611366 | orchestrator | 2026-02-20 02:36:06.611377 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-20 02:36:06.611388 | orchestrator | 2026-02-20 02:36:06.611399 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-20 02:36:06.611410 | orchestrator | Friday 20 February 2026 02:35:12 +0000 (0:00:02.243) 0:01:36.554 ******* 2026-02-20 02:36:06.611421 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:36:06.611432 | orchestrator | 2026-02-20 02:36:06.611443 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 02:36:06.611454 | orchestrator | Friday 20 February 2026 02:35:29 +0000 (0:00:17.279) 0:01:53.833 ******* 2026-02-20 02:36:06.611465 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.611476 | orchestrator | 2026-02-20 02:36:06.611487 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 02:36:06.611498 
| orchestrator | Friday 20 February 2026 02:35:46 +0000 (0:00:16.551) 0:02:10.385 ******* 2026-02-20 02:36:06.611509 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:06.611520 | orchestrator | 2026-02-20 02:36:06.611531 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-20 02:36:06.611542 | orchestrator | 2026-02-20 02:36:06.611553 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-20 02:36:06.611564 | orchestrator | Friday 20 February 2026 02:35:48 +0000 (0:00:02.415) 0:02:12.800 ******* 2026-02-20 02:36:06.611575 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:06.611585 | orchestrator | 2026-02-20 02:36:06.611597 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 02:36:06.611607 | orchestrator | Friday 20 February 2026 02:35:58 +0000 (0:00:09.463) 0:02:22.264 ******* 2026-02-20 02:36:06.611618 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.611629 | orchestrator | 2026-02-20 02:36:06.611640 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 02:36:06.611651 | orchestrator | Friday 20 February 2026 02:36:03 +0000 (0:00:05.554) 0:02:27.818 ******* 2026-02-20 02:36:06.611662 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:06.611673 | orchestrator | 2026-02-20 02:36:06.611762 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-20 02:36:06.611781 | orchestrator | 2026-02-20 02:36:06.611794 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-20 02:36:06.611805 | orchestrator | Friday 20 February 2026 02:36:06 +0000 (0:00:02.533) 0:02:30.352 ******* 2026-02-20 02:36:06.611816 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:36:06.611827 | orchestrator | 
2026-02-20 02:36:06.611838 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-20 02:36:06.611858 | orchestrator | Friday 20 February 2026 02:36:06 +0000 (0:00:00.498) 0:02:30.850 ******* 2026-02-20 02:36:19.142108 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:19.142217 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:19.142232 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:19.142243 | orchestrator | 2026-02-20 02:36:19.142255 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-20 02:36:19.142266 | orchestrator | Friday 20 February 2026 02:36:08 +0000 (0:00:02.401) 0:02:33.251 ******* 2026-02-20 02:36:19.142275 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:19.142285 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:19.142295 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:19.142305 | orchestrator | 2026-02-20 02:36:19.142315 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-20 02:36:19.142324 | orchestrator | Friday 20 February 2026 02:36:11 +0000 (0:00:02.176) 0:02:35.427 ******* 2026-02-20 02:36:19.142334 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:19.142344 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:19.142353 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:19.142385 | orchestrator | 2026-02-20 02:36:19.142396 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-20 02:36:19.142406 | orchestrator | Friday 20 February 2026 02:36:13 +0000 (0:00:02.402) 0:02:37.830 ******* 2026-02-20 02:36:19.142415 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:19.142425 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:19.142434 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:19.142444 | orchestrator | 
2026-02-20 02:36:19.142453 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-20 02:36:19.142463 | orchestrator | Friday 20 February 2026 02:36:15 +0000 (0:00:02.193) 0:02:40.023 ******* 2026-02-20 02:36:19.142472 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:19.142483 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:19.142493 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:19.142502 | orchestrator | 2026-02-20 02:36:19.142512 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-20 02:36:19.142521 | orchestrator | Friday 20 February 2026 02:36:18 +0000 (0:00:02.732) 0:02:42.755 ******* 2026-02-20 02:36:19.142531 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:19.142540 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:19.142550 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:19.142559 | orchestrator | 2026-02-20 02:36:19.142569 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:36:19.142579 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-20 02:36:19.142590 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-20 02:36:19.142615 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-20 02:36:19.142628 | orchestrator | 2026-02-20 02:36:19.142638 | orchestrator | 2026-02-20 02:36:19.142650 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:36:19.142661 | orchestrator | Friday 20 February 2026 02:36:18 +0000 (0:00:00.359) 0:02:43.115 ******* 2026-02-20 02:36:19.142672 | orchestrator | =============================================================================== 2026-02-20 02:36:19.142704 | 
orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.11s 2026-02-20 02:36:19.142715 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.79s 2026-02-20 02:36:19.142726 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.79s 2026-02-20 02:36:19.142738 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.04s 2026-02-20 02:36:19.142749 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 9.46s 2026-02-20 02:36:19.142760 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.21s 2026-02-20 02:36:19.142771 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.55s 2026-02-20 02:36:19.142782 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.66s 2026-02-20 02:36:19.142793 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.50s 2026-02-20 02:36:19.142804 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.73s 2026-02-20 02:36:19.142815 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.66s 2026-02-20 02:36:19.142826 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.65s 2026-02-20 02:36:19.142836 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.58s 2026-02-20 02:36:19.142847 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.53s 2026-02-20 02:36:19.142858 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.45s 2026-02-20 02:36:19.142877 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.40s 2026-02-20 02:36:19.142888 | orchestrator | 
mariadb : Creating shard root mysql user -------------------------------- 2.40s 2026-02-20 02:36:19.142899 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.22s 2026-02-20 02:36:19.142911 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.19s 2026-02-20 02:36:19.142922 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.18s 2026-02-20 02:36:21.314773 | orchestrator | 2026-02-20 02:36:21 | INFO  | Task 7961972a-4d1a-41f0-b0f5-a4f2b5d757b8 (rabbitmq) was prepared for execution. 2026-02-20 02:36:21.314874 | orchestrator | 2026-02-20 02:36:21 | INFO  | It takes a moment until task 7961972a-4d1a-41f0-b0f5-a4f2b5d757b8 (rabbitmq) has been started and output is visible here. 2026-02-20 02:36:32.332851 | orchestrator | 2026-02-20 02:36:32.332963 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 02:36:32.332979 | orchestrator | 2026-02-20 02:36:32.332992 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 02:36:32.333004 | orchestrator | Friday 20 February 2026 02:36:24 +0000 (0:00:00.122) 0:00:00.122 ******* 2026-02-20 02:36:32.333015 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:32.333027 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:36:32.333038 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:36:32.333049 | orchestrator | 2026-02-20 02:36:32.333060 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 02:36:32.333072 | orchestrator | Friday 20 February 2026 02:36:24 +0000 (0:00:00.218) 0:00:00.340 ******* 2026-02-20 02:36:32.333083 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-20 02:36:32.333095 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-20 02:36:32.333106 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-20 02:36:32.333118 | orchestrator | 2026-02-20 02:36:32.333129 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-20 02:36:32.333140 | orchestrator | 2026-02-20 02:36:32.333151 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 02:36:32.333162 | orchestrator | Friday 20 February 2026 02:36:25 +0000 (0:00:00.397) 0:00:00.737 ******* 2026-02-20 02:36:32.333173 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:36:32.333185 | orchestrator | 2026-02-20 02:36:32.333196 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-20 02:36:32.333207 | orchestrator | Friday 20 February 2026 02:36:25 +0000 (0:00:00.363) 0:00:01.100 ******* 2026-02-20 02:36:32.333218 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:32.333229 | orchestrator | 2026-02-20 02:36:32.333240 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-20 02:36:32.333251 | orchestrator | Friday 20 February 2026 02:36:26 +0000 (0:00:00.908) 0:00:02.009 ******* 2026-02-20 02:36:32.333262 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333273 | orchestrator | 2026-02-20 02:36:32.333285 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-20 02:36:32.333295 | orchestrator | Friday 20 February 2026 02:36:26 +0000 (0:00:00.318) 0:00:02.327 ******* 2026-02-20 02:36:32.333306 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333319 | orchestrator | 2026-02-20 02:36:32.333332 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-20 02:36:32.333369 | orchestrator | Friday 20 February 2026 02:36:27 +0000 (0:00:00.316) 0:00:02.644 ******* 
2026-02-20 02:36:32.333391 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333411 | orchestrator | 2026-02-20 02:36:32.333432 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-20 02:36:32.333453 | orchestrator | Friday 20 February 2026 02:36:27 +0000 (0:00:00.316) 0:00:02.961 ******* 2026-02-20 02:36:32.333492 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333506 | orchestrator | 2026-02-20 02:36:32.333518 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 02:36:32.333530 | orchestrator | Friday 20 February 2026 02:36:28 +0000 (0:00:00.419) 0:00:03.381 ******* 2026-02-20 02:36:32.333543 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:36:32.333555 | orchestrator | 2026-02-20 02:36:32.333567 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-20 02:36:32.333578 | orchestrator | Friday 20 February 2026 02:36:28 +0000 (0:00:00.706) 0:00:04.087 ******* 2026-02-20 02:36:32.333590 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:36:32.333602 | orchestrator | 2026-02-20 02:36:32.333614 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-20 02:36:32.333626 | orchestrator | Friday 20 February 2026 02:36:29 +0000 (0:00:00.781) 0:00:04.869 ******* 2026-02-20 02:36:32.333639 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333651 | orchestrator | 2026-02-20 02:36:32.333663 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-20 02:36:32.333675 | orchestrator | Friday 20 February 2026 02:36:29 +0000 (0:00:00.322) 0:00:05.191 ******* 2026-02-20 02:36:32.333738 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:32.333749 | orchestrator | 2026-02-20 
02:36:32.333759 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-20 02:36:32.333770 | orchestrator | Friday 20 February 2026 02:36:30 +0000 (0:00:00.298) 0:00:05.489 ******* 2026-02-20 02:36:32.333807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:32.333824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:32.333844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:32.333865 | orchestrator | 2026-02-20 02:36:32.333877 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-20 02:36:32.333888 | orchestrator | Friday 20 February 2026 02:36:30 +0000 (0:00:00.732) 0:00:06.222 ******* 2026-02-20 02:36:32.333900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:32.333921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:50.437821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:50.437969 | orchestrator | 2026-02-20 02:36:50.437989 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-20 02:36:50.438003 | orchestrator | Friday 20 February 2026 02:36:32 +0000 (0:00:01.446) 0:00:07.669 ******* 2026-02-20 02:36:50.438081 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 02:36:50.438112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 02:36:50.438124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 02:36:50.438135 | orchestrator | 2026-02-20 02:36:50.438147 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-20 02:36:50.438158 | orchestrator | Friday 20 February 2026 02:36:33 +0000 (0:00:01.319) 0:00:08.989 ******* 2026-02-20 02:36:50.438169 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 02:36:50.438180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 02:36:50.438191 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 02:36:50.438202 | orchestrator | 2026-02-20 02:36:50.438213 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-20 02:36:50.438224 | orchestrator | Friday 20 February 2026 02:36:35 +0000 (0:00:01.646) 0:00:10.635 ******* 2026-02-20 02:36:50.438234 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 02:36:50.438245 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 02:36:50.438256 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 02:36:50.438267 | orchestrator | 2026-02-20 02:36:50.438280 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-20 02:36:50.438292 | orchestrator | Friday 20 February 2026 02:36:36 +0000 (0:00:01.390) 0:00:12.026 ******* 2026-02-20 02:36:50.438304 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 02:36:50.438317 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 02:36:50.438329 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 02:36:50.438342 | orchestrator | 2026-02-20 02:36:50.438354 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-20 02:36:50.438366 | orchestrator | Friday 20 February 2026 02:36:38 +0000 (0:00:01.632) 0:00:13.658 ******* 2026-02-20 02:36:50.438379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 02:36:50.438391 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 02:36:50.438404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 02:36:50.438417 | orchestrator | 2026-02-20 02:36:50.438429 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-20 02:36:50.438442 | orchestrator | Friday 20 February 2026 02:36:39 +0000 (0:00:01.373) 0:00:15.032 ******* 2026-02-20 02:36:50.438454 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 02:36:50.438467 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 02:36:50.438479 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 02:36:50.438502 | orchestrator | 2026-02-20 02:36:50.438515 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 02:36:50.438528 | orchestrator | Friday 20 February 2026 02:36:41 +0000 (0:00:01.339) 0:00:16.371 ******* 2026-02-20 02:36:50.438541 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:36:50.438553 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:36:50.438582 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:36:50.438594 | orchestrator | 2026-02-20 02:36:50.438605 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-20 02:36:50.438616 | orchestrator | Friday 
20 February 2026 02:36:41 +0000 (0:00:00.390) 0:00:16.761 ******* 2026-02-20 02:36:50.438628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:50.438648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:50.438662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 02:36:50.438674 | orchestrator | 2026-02-20 02:36:50.438713 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-20 02:36:50.438725 | orchestrator | Friday 20 February 2026 02:36:42 +0000 (0:00:01.132) 0:00:17.893 ******* 2026-02-20 02:36:50.438736 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:50.438747 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:36:50.438757 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:36:50.438768 | orchestrator | 2026-02-20 02:36:50.438779 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-20 02:36:50.438790 | orchestrator | Friday 20 February 2026 02:36:43 +0000 (0:00:00.791) 0:00:18.684 ******* 2026-02-20 02:36:50.438800 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:36:50.438811 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:36:50.438834 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:36:50.438845 | orchestrator | 2026-02-20 02:36:50.438867 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-20 02:36:50.438885 | orchestrator | Friday 20 February 2026 02:36:50 +0000 (0:00:07.086) 0:00:25.770 ******* 2026-02-20 02:38:27.599080 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:38:27.599227 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:38:27.599247 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:38:27.599259 | orchestrator | 2026-02-20 02:38:27.599271 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-20 02:38:27.599283 | orchestrator | 2026-02-20 02:38:27.599294 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-20 02:38:27.599305 | orchestrator | Friday 20 February 2026 02:36:50 +0000 (0:00:00.439) 0:00:26.210 ******* 2026-02-20 02:38:27.599316 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:38:27.599328 | orchestrator | 2026-02-20 02:38:27.599339 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-20 02:38:27.599350 | orchestrator | Friday 20 February 2026 02:36:51 +0000 (0:00:00.617) 0:00:26.827 ******* 2026-02-20 02:38:27.599360 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:38:27.599371 | orchestrator | 2026-02-20 02:38:27.599382 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-20 02:38:27.599392 | orchestrator | Friday 20 
February 2026 02:36:51 +0000 (0:00:00.221) 0:00:27.048 ******* 2026-02-20 02:38:27.599403 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:38:27.599414 | orchestrator | 2026-02-20 02:38:27.599425 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-20 02:38:27.599436 | orchestrator | Friday 20 February 2026 02:36:58 +0000 (0:00:06.723) 0:00:33.771 ******* 2026-02-20 02:38:27.599447 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:38:27.599457 | orchestrator | 2026-02-20 02:38:27.599468 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-20 02:38:27.599479 | orchestrator | 2026-02-20 02:38:27.599489 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-20 02:38:27.599500 | orchestrator | Friday 20 February 2026 02:37:48 +0000 (0:00:50.217) 0:01:23.989 ******* 2026-02-20 02:38:27.599511 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:38:27.599521 | orchestrator | 2026-02-20 02:38:27.599532 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-20 02:38:27.599559 | orchestrator | Friday 20 February 2026 02:37:49 +0000 (0:00:00.585) 0:01:24.575 ******* 2026-02-20 02:38:27.599571 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:38:27.599581 | orchestrator | 2026-02-20 02:38:27.599592 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-20 02:38:27.599602 | orchestrator | Friday 20 February 2026 02:37:49 +0000 (0:00:00.212) 0:01:24.787 ******* 2026-02-20 02:38:27.599615 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:38:27.599627 | orchestrator | 2026-02-20 02:38:27.599639 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-20 02:38:27.599652 | orchestrator | Friday 20 February 2026 02:37:50 +0000 (0:00:01.556) 0:01:26.343 
******* 2026-02-20 02:38:27.599664 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:38:27.599697 | orchestrator | 2026-02-20 02:38:27.599739 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-20 02:38:27.599751 | orchestrator | 2026-02-20 02:38:27.599764 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-20 02:38:27.599776 | orchestrator | Friday 20 February 2026 02:38:05 +0000 (0:00:14.617) 0:01:40.961 ******* 2026-02-20 02:38:27.599788 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:38:27.599801 | orchestrator | 2026-02-20 02:38:27.599812 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-20 02:38:27.599824 | orchestrator | Friday 20 February 2026 02:38:06 +0000 (0:00:00.765) 0:01:41.726 ******* 2026-02-20 02:38:27.599836 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:38:27.599849 | orchestrator | 2026-02-20 02:38:27.599861 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-20 02:38:27.599874 | orchestrator | Friday 20 February 2026 02:38:06 +0000 (0:00:00.214) 0:01:41.940 ******* 2026-02-20 02:38:27.599886 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:38:27.599899 | orchestrator | 2026-02-20 02:38:27.599911 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-20 02:38:27.599923 | orchestrator | Friday 20 February 2026 02:38:13 +0000 (0:00:06.615) 0:01:48.556 ******* 2026-02-20 02:38:27.599935 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:38:27.599947 | orchestrator | 2026-02-20 02:38:27.599961 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-20 02:38:27.599973 | orchestrator | 2026-02-20 02:38:27.599984 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-20 02:38:27.599994 | orchestrator | Friday 20 February 2026 02:38:24 +0000 (0:00:11.370) 0:01:59.926 ******* 2026-02-20 02:38:27.600005 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:38:27.600015 | orchestrator | 2026-02-20 02:38:27.600026 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-20 02:38:27.600037 | orchestrator | Friday 20 February 2026 02:38:25 +0000 (0:00:00.443) 0:02:00.370 ******* 2026-02-20 02:38:27.600047 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-20 02:38:27.600058 | orchestrator | enable_outward_rabbitmq_True 2026-02-20 02:38:27.600069 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-20 02:38:27.600079 | orchestrator | outward_rabbitmq_restart 2026-02-20 02:38:27.600090 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:38:27.600101 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:38:27.600111 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:38:27.600122 | orchestrator | 2026-02-20 02:38:27.600133 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-20 02:38:27.600143 | orchestrator | skipping: no hosts matched 2026-02-20 02:38:27.600154 | orchestrator | 2026-02-20 02:38:27.600164 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-20 02:38:27.600175 | orchestrator | skipping: no hosts matched 2026-02-20 02:38:27.600186 | orchestrator | 2026-02-20 02:38:27.600196 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-20 02:38:27.600207 | orchestrator | skipping: no hosts matched 2026-02-20 02:38:27.600217 | orchestrator | 2026-02-20 02:38:27.600228 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-20 02:38:27.600258 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-20 02:38:27.600272 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:38:27.600282 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:38:27.600293 | orchestrator | 2026-02-20 02:38:27.600304 | orchestrator | 2026-02-20 02:38:27.600323 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:38:27.600334 | orchestrator | Friday 20 February 2026 02:38:27 +0000 (0:00:02.280) 0:02:02.651 ******* 2026-02-20 02:38:27.600345 | orchestrator | =============================================================================== 2026-02-20 02:38:27.600356 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 76.21s 2026-02-20 02:38:27.600367 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.90s 2026-02-20 02:38:27.600378 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.09s 2026-02-20 02:38:27.600388 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.28s 2026-02-20 02:38:27.600399 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s 2026-02-20 02:38:27.600409 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.65s 2026-02-20 02:38:27.600420 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.63s 2026-02-20 02:38:27.600431 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.45s 2026-02-20 02:38:27.600441 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.39s 2026-02-20 02:38:27.600457 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-02-20 02:38:27.600468 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.34s 2026-02-20 02:38:27.600479 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.32s 2026-02-20 02:38:27.600489 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.13s 2026-02-20 02:38:27.600500 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.91s 2026-02-20 02:38:27.600511 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s 2026-02-20 02:38:27.600521 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.78s 2026-02-20 02:38:27.600532 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.73s 2026-02-20 02:38:27.600542 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.71s 2026-02-20 02:38:27.600553 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.65s 2026-02-20 02:38:27.600564 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.44s 2026-02-20 02:38:29.799247 | orchestrator | 2026-02-20 02:38:29 | INFO  | Task 0a2e09d4-fd65-4d35-942b-6ce3b01f3c01 (openvswitch) was prepared for execution. 2026-02-20 02:38:29.799376 | orchestrator | 2026-02-20 02:38:29 | INFO  | It takes a moment until task 0a2e09d4-fd65-4d35-942b-6ce3b01f3c01 (openvswitch) has been started and output is visible here. 
2026-02-20 02:38:39.961269 | orchestrator | 2026-02-20 02:38:39.961389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 02:38:39.961406 | orchestrator | 2026-02-20 02:38:39.961419 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 02:38:39.961430 | orchestrator | Friday 20 February 2026 02:38:33 +0000 (0:00:00.183) 0:00:00.183 ******* 2026-02-20 02:38:39.961441 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:38:39.961454 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:38:39.961464 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:38:39.961475 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:38:39.961486 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:38:39.961496 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:38:39.961507 | orchestrator | 2026-02-20 02:38:39.961518 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 02:38:39.961528 | orchestrator | Friday 20 February 2026 02:38:33 +0000 (0:00:00.465) 0:00:00.649 ******* 2026-02-20 02:38:39.961539 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961551 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961585 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961597 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961607 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961618 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-20 02:38:39.961629 | orchestrator | 2026-02-20 02:38:39.961640 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-20 02:38:39.961651 | orchestrator | 2026-02-20 02:38:39.961662 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-20 02:38:39.961672 | orchestrator | Friday 20 February 2026 02:38:34 +0000 (0:00:00.430) 0:00:01.079 ******* 2026-02-20 02:38:39.961684 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:38:39.961696 | orchestrator | 2026-02-20 02:38:39.961707 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-20 02:38:39.961718 | orchestrator | Friday 20 February 2026 02:38:34 +0000 (0:00:00.777) 0:00:01.856 ******* 2026-02-20 02:38:39.961754 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-20 02:38:39.961766 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-20 02:38:39.961776 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-20 02:38:39.961787 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-20 02:38:39.961799 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-20 02:38:39.961811 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-20 02:38:39.961823 | orchestrator | 2026-02-20 02:38:39.961836 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-20 02:38:39.961848 | orchestrator | Friday 20 February 2026 02:38:35 +0000 (0:00:00.951) 0:00:02.808 ******* 2026-02-20 02:38:39.961860 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-20 02:38:39.961871 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-20 02:38:39.961881 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-20 02:38:39.961892 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2026-02-20 02:38:39.961902 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-20 02:38:39.961912 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-20 02:38:39.961923 | orchestrator | 2026-02-20 02:38:39.961933 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-20 02:38:39.961944 | orchestrator | Friday 20 February 2026 02:38:37 +0000 (0:00:01.368) 0:00:04.176 ******* 2026-02-20 02:38:39.961955 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-20 02:38:39.961966 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:38:39.961977 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-20 02:38:39.961988 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:38:39.961999 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-20 02:38:39.962009 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:38:39.962091 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-20 02:38:39.962103 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:38:39.962114 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-20 02:38:39.962124 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:38:39.962135 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-20 02:38:39.962146 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:38:39.962157 | orchestrator | 2026-02-20 02:38:39.962168 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-20 02:38:39.962179 | orchestrator | Friday 20 February 2026 02:38:38 +0000 (0:00:00.930) 0:00:05.107 ******* 2026-02-20 02:38:39.962190 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:38:39.962210 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:38:39.962221 | orchestrator | skipping: [testbed-node-2] 
2026-02-20 02:38:39.962231 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:38:39.962242 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:38:39.962252 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:38:39.962263 | orchestrator | 2026-02-20 02:38:39.962274 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-20 02:38:39.962284 | orchestrator | Friday 20 February 2026 02:38:38 +0000 (0:00:00.556) 0:00:05.663 ******* 2026-02-20 02:38:39.962319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:39.962335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:39.962348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:39.962406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:39.962424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:39.962451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 02:38:42.251857 | orchestrator | 2026-02-20 02:38:42.251871 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-20 02:38:42.251883 | orchestrator | Friday 20 February 2026 02:38:40 +0000 (0:00:01.261) 0:00:06.924 ******* 2026-02-20 02:38:42.251895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:42.251908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:42.251920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:42.251938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:42.251957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:42.251978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:44.905095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:38:44.905323 | orchestrator |
2026-02-20 02:38:44.905336 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-20 02:38:44.905348 | orchestrator | Friday 20 February 2026 02:38:42 +0000 (0:00:02.284) 0:00:09.208 *******
2026-02-20 02:38:44.905359 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:38:44.905371 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:38:44.905382 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:38:44.905393 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:38:44.905404 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:38:44.905415 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:38:44.905425 | orchestrator |
2026-02-20 02:38:44.905437 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-02-20 02:38:44.905447 | orchestrator | Friday 20 February 2026 02:38:43 +0000 (0:00:00.860) 0:00:10.068 *******
2026-02-20 02:38:44.905459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:44.905479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:44.905496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:44.905508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:38:44.905528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:39:08.943060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 02:39:08.943203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 02:39:08.943381 | orchestrator |
2026-02-20 02:39:08.943395 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943408 | orchestrator | Friday 20 February 2026 02:38:44 +0000 (0:00:01.802) 0:00:11.871 *******
2026-02-20 02:39:08.943419 | orchestrator |
2026-02-20 02:39:08.943431 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943442 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.270) 0:00:12.141 *******
2026-02-20 02:39:08.943452 | orchestrator |
2026-02-20 02:39:08.943463 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943474 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.124) 0:00:12.266 *******
2026-02-20 02:39:08.943485 | orchestrator |
2026-02-20 02:39:08.943496 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943506 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.123) 0:00:12.389 *******
2026-02-20 02:39:08.943517 | orchestrator |
2026-02-20 02:39:08.943528 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943538 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.122) 0:00:12.511 *******
2026-02-20 02:39:08.943549 | orchestrator |
2026-02-20 02:39:08.943560 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 02:39:08.943571 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.121) 0:00:12.633 *******
2026-02-20 02:39:08.943581 | orchestrator |
2026-02-20 02:39:08.943594 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-20 02:39:08.943613 | orchestrator | Friday 20 February 2026 02:38:45 +0000 (0:00:00.123) 0:00:12.756 *******
2026-02-20 02:39:08.943626 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:39:08.943640 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:39:08.943653 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:39:08.943665 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:39:08.943677 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:39:08.943689 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:39:08.943702 | orchestrator |
2026-02-20 02:39:08.943715 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-20 02:39:08.943730 | orchestrator | Friday 20 February 2026 02:38:54 +0000 (0:00:08.554) 0:00:21.311 *******
2026-02-20 02:39:08.943773 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:39:08.943795 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:39:08.943815 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:39:08.943828 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:39:08.943847 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:39:08.943865 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:39:08.943883 | orchestrator |
2026-02-20 02:39:08.943901 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-20 02:39:08.943918 | orchestrator | Friday 20 February 2026 02:38:55 +0000 (0:00:01.047) 0:00:22.358 *******
2026-02-20 02:39:08.943936 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:39:08.943954 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:39:08.943972 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:39:08.943991 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:39:08.944010 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:39:08.944029 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:39:08.944040 | orchestrator |
2026-02-20 02:39:08.944051 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-20 02:39:08.944062 | orchestrator | Friday 20 February 2026 02:39:02 +0000 (0:00:07.049) 0:00:29.407 *******
2026-02-20 02:39:08.944073 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-20 02:39:08.944084 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-20 02:39:08.944095 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-20 02:39:08.944117 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-20 02:39:08.944128 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-20 02:39:08.944139 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-20 02:39:08.944149 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-20 02:39:08.944171 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-20 02:39:21.579196 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-20 02:39:21.579289 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-20 02:39:21.579301 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-20 02:39:21.579310 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-20 02:39:21.579319 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579329 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579335 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579341 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579346 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579351 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 02:39:21.579357 | orchestrator |
2026-02-20 02:39:21.579363 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-20 02:39:21.579370 | orchestrator | Friday 20 February 2026 02:39:08 +0000 (0:00:06.413) 0:00:35.821 *******
2026-02-20 02:39:21.579376 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-20 02:39:21.579382 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:39:21.579388 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-20 02:39:21.579393 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:39:21.579398 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-20 02:39:21.579404 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:39:21.579409 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-20 02:39:21.579414 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-20 02:39:21.579420 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-20 02:39:21.579425 | orchestrator |
2026-02-20 02:39:21.579431 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-20 02:39:21.579436 | orchestrator | Friday 20 February 2026 02:39:11 +0000 (0:00:02.337) 0:00:38.158 *******
2026-02-20 02:39:21.579454 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579459 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:39:21.579464 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579470 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:39:21.579475 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579480 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:39:21.579485 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579490 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579510 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-20 02:39:21.579516 | orchestrator |
2026-02-20 02:39:21.579521 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-20 02:39:21.579526 | orchestrator | Friday 20 February 2026 02:39:14 +0000 (0:00:02.971) 0:00:41.129 *******
2026-02-20 02:39:21.579531 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:39:21.579536 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:39:21.579541 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:39:21.579546 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:39:21.579551 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:39:21.579556 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:39:21.579562 | orchestrator |
2026-02-20 02:39:21.579567 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:39:21.579573 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 02:39:21.579579 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 02:39:21.579585 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 02:39:21.579590 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 02:39:21.579595 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 02:39:21.579600 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 02:39:21.579605 | orchestrator |
2026-02-20 02:39:21.579610 | orchestrator |
2026-02-20 02:39:21.579615 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:39:21.579620 | orchestrator | Friday 20 February 2026 02:39:21 +0000 (0:00:07.013) 0:00:48.142 *******
2026-02-20 02:39:21.579637 | orchestrator | ===============================================================================
2026-02-20 02:39:21.579643 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.06s
2026-02-20 02:39:21.579648 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.55s
2026-02-20 02:39:21.579653 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.41s
2026-02-20 02:39:21.579658 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.97s
2026-02-20 02:39:21.579663 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.34s
2026-02-20 02:39:21.579668 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.28s
2026-02-20 02:39:21.579673 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.80s
2026-02-20 02:39:21.579678 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.37s
2026-02-20 02:39:21.579683 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.26s
2026-02-20 02:39:21.579688 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.05s
2026-02-20 02:39:21.579693 | orchestrator | module-load : Load modules ---------------------------------------------- 0.95s
2026-02-20 02:39:21.579698 | orchestrator | module-load : Drop module persistence ----------------------------------- 0.93s
2026-02-20 02:39:21.579703 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.89s
2026-02-20 02:39:21.579708 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.86s
2026-02-20 02:39:21.579713 | orchestrator | openvswitch : include_tasks --------------------------------------------- 0.78s
2026-02-20 02:39:21.579723 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.56s
2026-02-20 02:39:21.579728 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2026-02-20 02:39:21.579733 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-02-20 02:39:23.806089 | orchestrator | 2026-02-20 02:39:23 | INFO  | Task 4b663462-6adb-40c0-a4ae-1e7e4ceac945 (ovn) was prepared for execution.
2026-02-20 02:39:23.806159 | orchestrator | 2026-02-20 02:39:23 | INFO  | It takes a moment until task 4b663462-6adb-40c0-a4ae-1e7e4ceac945 (ovn) has been started and output is visible here.
2026-02-20 02:39:32.867854 | orchestrator |
2026-02-20 02:39:32.867990 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 02:39:32.868009 | orchestrator |
2026-02-20 02:39:32.868021 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 02:39:32.868050 | orchestrator | Friday 20 February 2026 02:39:27 +0000 (0:00:00.117) 0:00:00.117 *******
2026-02-20 02:39:32.868061 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:39:32.868074 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:39:32.868085 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:39:32.868096 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:39:32.868106 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:39:32.868117 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:39:32.868128 | orchestrator |
2026-02-20 02:39:32.868139 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 02:39:32.868150 | orchestrator | Friday 20 February 2026 02:39:27 +0000 (0:00:00.491) 0:00:00.609 *******
2026-02-20 02:39:32.868160 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-20 02:39:32.868172 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-20 02:39:32.868182 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-20 02:39:32.868193 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-20 02:39:32.868205 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-20 02:39:32.868216 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-20 02:39:32.868227 | orchestrator |
2026-02-20 02:39:32.868238 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-20 02:39:32.868248 | orchestrator |
2026-02-20 02:39:32.868259 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-20 02:39:32.868270 | orchestrator | Friday 20 February 2026 02:39:28 +0000 (0:00:00.701) 0:00:01.310 *******
2026-02-20 02:39:32.868281 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:39:32.868294 | orchestrator |
2026-02-20 02:39:32.868307 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-20 02:39:32.868318 | orchestrator | Friday 20 February 2026 02:39:29 +0000 (0:00:00.897) 0:00:02.208 *******
2026-02-20 02:39:32.868334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868349 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868452 | orchestrator |
2026-02-20 02:39:32.868463 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-20 02:39:32.868480 | orchestrator | Friday 20 February 2026 02:39:30 +0000 (0:00:01.013) 0:00:03.222 *******
2026-02-20 02:39:32.868491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 02:39:32.868525 | orchestrator | changed: [testbed-node-0] => (item={'key':
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:32.868536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:32.868554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:32.868565 | orchestrator | 2026-02-20 02:39:32.868576 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-20 02:39:32.868587 | orchestrator | Friday 20 February 2026 02:39:31 +0000 (0:00:01.394) 0:00:04.616 ******* 2026-02-20 02:39:32.868598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:32.868610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:32.868634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976481 | orchestrator | 2026-02-20 02:39:56.976494 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-20 02:39:56.976506 | orchestrator | Friday 20 February 2026 02:39:32 +0000 (0:00:00.959) 0:00:05.575 ******* 2026-02-20 02:39:56.976541 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976642 | orchestrator | 2026-02-20 02:39:56.976654 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-20 02:39:56.976665 | orchestrator | Friday 20 February 2026 02:39:34 +0000 (0:00:01.425) 0:00:07.001 ******* 
2026-02-20 02:39:56.976676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976728 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:39:56.976751 | orchestrator | 2026-02-20 02:39:56.976762 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-20 02:39:56.976773 | orchestrator | Friday 20 February 2026 02:39:35 +0000 (0:00:01.332) 0:00:08.333 ******* 2026-02-20 02:39:56.976823 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:39:56.976845 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:39:56.976867 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:39:56.976888 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:39:56.976903 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:39:56.976916 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:39:56.976929 | orchestrator | 2026-02-20 02:39:56.976942 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-20 02:39:56.976953 | orchestrator | Friday 20 February 2026 02:39:38 +0000 (0:00:02.455) 0:00:10.789 ******* 2026-02-20 02:39:56.976963 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 
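The "Configure OVN in OVSDB" task above writes `external_ids` keys (encap IP, encap type, southbound remotes, probe intervals) into each node's local Open vSwitch database, which `ovn-controller` then reads to join the overlay. A minimal sketch of the equivalent manual `ovs-vsctl` invocations, using the values the log reports for testbed-node-0 (the `to_ovs_vsctl` helper is hypothetical, not part of the job):

```python
# Hypothetical sketch: render the "Configure OVN in OVSDB" settings as
# the ovs-vsctl commands one would run by hand. Values are taken from
# the log output for testbed-node-0 above.
external_ids = {
    "ovn-encap-ip": "192.168.16.10",
    "ovn-encap-type": "geneve",
    "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
    "ovn-remote-probe-interval": "60000",
    "ovn-openflow-probe-interval": "60",
    "ovn-monitor-all": "false",
}

def to_ovs_vsctl(ids: dict) -> list[str]:
    """Render one `ovs-vsctl set` command per external_ids key."""
    return [
        f'ovs-vsctl set open_vswitch . external_ids:{key}="{value}"'
        for key, value in ids.items()
    ]

for cmd in to_ovs_vsctl(external_ids):
    print(cmd)
```

Gateway-only keys (`ovn-bridge-mappings`, `ovn-cms-options`) are set `present` on the control nodes and `absent` on the compute nodes in the log above, which is why the same task reports `changed` on nodes 0-2 and `ok` on nodes 3-5 for those items.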
2026-02-20 02:39:56.976975 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-20 02:39:56.976985 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-20 02:39:56.976996 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-20 02:39:56.977006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-20 02:39:56.977023 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-20 02:39:56.977041 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829549 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829667 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 02:40:34.829741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829754 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829776 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-20 02:40:34.829809 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829941 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 02:40:34.829967 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 02:40:34.829978 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 02:40:34.829989 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 02:40:34.829999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 02:40:34.830010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-20 02:40:34.830093 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 02:40:34.830113 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830132 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830184 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 02:40:34.830220 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 02:40:34.830238 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 02:40:34.830256 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 02:40:34.830272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 02:40:34.830303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 02:40:34.830322 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 02:40:34.830358 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-02-20 02:40:34.830403 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-20 02:40:34.830421 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-20 02:40:34.830433 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-20 02:40:34.830444 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-20 02:40:34.830454 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-20 02:40:34.830465 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 02:40:34.830475 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 02:40:34.830486 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 02:40:34.830497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 02:40:34.830508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 02:40:34.830518 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 02:40:34.830529 | orchestrator | 2026-02-20 02:40:34.830541 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-20 02:40:34.830551 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:18.376) 0:00:29.165 ******* 2026-02-20 02:40:34.830562 | orchestrator | 2026-02-20 02:40:34.830573 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 02:40:34.830584 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.199) 0:00:29.364 ******* 2026-02-20 02:40:34.830594 | orchestrator | 2026-02-20 02:40:34.830605 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 02:40:34.830616 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.060) 0:00:29.425 ******* 2026-02-20 02:40:34.830626 | orchestrator | 2026-02-20 02:40:34.830637 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 02:40:34.830647 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.061) 0:00:29.486 ******* 2026-02-20 02:40:34.830658 | orchestrator | 2026-02-20 02:40:34.830668 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 02:40:34.830679 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.061) 0:00:29.547 ******* 2026-02-20 02:40:34.830689 | orchestrator | 2026-02-20 02:40:34.830700 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 02:40:34.830711 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.061) 0:00:29.609 ******* 2026-02-20 02:40:34.830721 | orchestrator | 2026-02-20 02:40:34.830732 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-20 02:40:34.830751 | orchestrator | Friday 20 February 2026 02:39:56 +0000 (0:00:00.061) 0:00:29.670 ******* 2026-02-20 02:40:34.830769 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:40:34.830800 | orchestrator | ok: 
[testbed-node-5] 2026-02-20 02:40:34.830818 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:34.830860 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:40:34.830877 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:34.830894 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:34.830911 | orchestrator | 2026-02-20 02:40:34.830929 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-20 02:40:34.830947 | orchestrator | Friday 20 February 2026 02:39:58 +0000 (0:00:01.548) 0:00:31.219 ******* 2026-02-20 02:40:34.830963 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:40:34.830982 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:40:34.830999 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:40:34.831018 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:40:34.831036 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:40:34.831054 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:40:34.831071 | orchestrator | 2026-02-20 02:40:34.831091 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-20 02:40:34.831102 | orchestrator | 2026-02-20 02:40:34.831113 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 02:40:34.831123 | orchestrator | Friday 20 February 2026 02:40:32 +0000 (0:00:34.259) 0:01:05.479 ******* 2026-02-20 02:40:34.831134 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:40:34.831145 | orchestrator | 2026-02-20 02:40:34.831155 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 02:40:34.831166 | orchestrator | Friday 20 February 2026 02:40:33 +0000 (0:00:00.614) 0:01:06.094 ******* 2026-02-20 02:40:34.831177 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-20 02:40:34.831187 | orchestrator | 2026-02-20 02:40:34.831198 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-20 02:40:34.831217 | orchestrator | Friday 20 February 2026 02:40:33 +0000 (0:00:00.495) 0:01:06.589 ******* 2026-02-20 02:40:34.831253 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:34.831273 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:34.831291 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:34.831308 | orchestrator | 2026-02-20 02:40:34.831326 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-20 02:40:34.831357 | orchestrator | Friday 20 February 2026 02:40:34 +0000 (0:00:00.942) 0:01:07.532 ******* 2026-02-20 02:40:45.001284 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.001431 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.001458 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.001479 | orchestrator | 2026-02-20 02:40:45.001501 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-20 02:40:45.001522 | orchestrator | Friday 20 February 2026 02:40:35 +0000 (0:00:00.298) 0:01:07.831 ******* 2026-02-20 02:40:45.001541 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.001561 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.001581 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.001622 | orchestrator | 2026-02-20 02:40:45.001657 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-20 02:40:45.001676 | orchestrator | Friday 20 February 2026 02:40:35 +0000 (0:00:00.314) 0:01:08.145 ******* 2026-02-20 02:40:45.001694 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.001712 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.001731 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.001751 | orchestrator | 
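The `ovn-db : lookup_cluster.yml` tasks above decide between bootstrapping a new Raft cluster and joining an existing one by first checking for NB/SB container volumes and then, if a cluster is already running, querying its leader (the leader checks are all `skipping` here because no volumes exist yet, so the run proceeds to `bootstrap-initial.yml`). A sketch of the equivalent manual probe, assuming kolla's default container names and ctl socket paths:

```python
# Hypothetical sketch of manually checking OVN DB Raft cluster status,
# the same information the ovn-db role's leader/follower tasks gather.
# Container names and ctl socket paths are assumptions based on kolla
# defaults, not values confirmed by this log.
DATABASES = {
    "OVN_Northbound": ("ovn_nb_db", "/run/ovn/ovnnb_db.ctl"),
    "OVN_Southbound": ("ovn_sb_db", "/run/ovn/ovnsb_db.ctl"),
}

def cluster_status_cmd(db: str) -> str:
    """Build the docker exec command that dumps Raft status for one DB."""
    container, ctl = DATABASES[db]
    return f"docker exec {container} ovs-appctl -t {ctl} cluster/status {db}"

for db in DATABASES:
    print(cluster_status_cmd(db))
```

The `cluster/status` output names the current leader and the cluster members, which is what the skipped "Divide hosts by their OVN NB/SB leader/follower role" tasks would have parsed on an existing cluster.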
2026-02-20 02:40:45.001772 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-20 02:40:45.001794 | orchestrator | Friday 20 February 2026 02:40:35 +0000 (0:00:00.313) 0:01:08.459 ******* 2026-02-20 02:40:45.001816 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.001865 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.001885 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.001906 | orchestrator | 2026-02-20 02:40:45.001963 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-20 02:40:45.001987 | orchestrator | Friday 20 February 2026 02:40:36 +0000 (0:00:00.476) 0:01:08.936 ******* 2026-02-20 02:40:45.002011 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.002115 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.002148 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.002187 | orchestrator | 2026-02-20 02:40:45.003194 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-20 02:40:45.003235 | orchestrator | Friday 20 February 2026 02:40:36 +0000 (0:00:00.341) 0:01:09.277 ******* 2026-02-20 02:40:45.003248 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003262 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003275 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003287 | orchestrator | 2026-02-20 02:40:45.003300 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-20 02:40:45.003313 | orchestrator | Friday 20 February 2026 02:40:36 +0000 (0:00:00.296) 0:01:09.574 ******* 2026-02-20 02:40:45.003326 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003338 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003348 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003359 | orchestrator | 2026-02-20 
02:40:45.003370 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-20 02:40:45.003381 | orchestrator | Friday 20 February 2026 02:40:37 +0000 (0:00:00.260) 0:01:09.834 ******* 2026-02-20 02:40:45.003392 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003403 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003414 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003425 | orchestrator | 2026-02-20 02:40:45.003436 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-20 02:40:45.003447 | orchestrator | Friday 20 February 2026 02:40:37 +0000 (0:00:00.264) 0:01:10.099 ******* 2026-02-20 02:40:45.003459 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003469 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003480 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003491 | orchestrator | 2026-02-20 02:40:45.003502 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-20 02:40:45.003512 | orchestrator | Friday 20 February 2026 02:40:37 +0000 (0:00:00.425) 0:01:10.525 ******* 2026-02-20 02:40:45.003523 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003534 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003544 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003555 | orchestrator | 2026-02-20 02:40:45.003566 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-20 02:40:45.003577 | orchestrator | Friday 20 February 2026 02:40:38 +0000 (0:00:00.284) 0:01:10.809 ******* 2026-02-20 02:40:45.003588 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003598 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003609 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003620 | orchestrator | 2026-02-20 
02:40:45.003631 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-20 02:40:45.003641 | orchestrator | Friday 20 February 2026 02:40:38 +0000 (0:00:00.272) 0:01:11.081 ******* 2026-02-20 02:40:45.003652 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003663 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003674 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003684 | orchestrator | 2026-02-20 02:40:45.003695 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-20 02:40:45.003706 | orchestrator | Friday 20 February 2026 02:40:38 +0000 (0:00:00.289) 0:01:11.371 ******* 2026-02-20 02:40:45.003717 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003728 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003738 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003749 | orchestrator | 2026-02-20 02:40:45.003760 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-20 02:40:45.003789 | orchestrator | Friday 20 February 2026 02:40:39 +0000 (0:00:00.429) 0:01:11.800 ******* 2026-02-20 02:40:45.003802 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003813 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003823 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003870 | orchestrator | 2026-02-20 02:40:45.003883 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-20 02:40:45.003894 | orchestrator | Friday 20 February 2026 02:40:39 +0000 (0:00:00.255) 0:01:12.056 ******* 2026-02-20 02:40:45.003905 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.003930 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.003942 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.003953 | orchestrator | 2026-02-20 
02:40:45.003964 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-20 02:40:45.003975 | orchestrator | Friday 20 February 2026 02:40:39 +0000 (0:00:00.272) 0:01:12.328 ******* 2026-02-20 02:40:45.004011 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004023 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004034 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004045 | orchestrator | 2026-02-20 02:40:45.004056 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 02:40:45.004067 | orchestrator | Friday 20 February 2026 02:40:39 +0000 (0:00:00.254) 0:01:12.583 ******* 2026-02-20 02:40:45.004078 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:40:45.004089 | orchestrator | 2026-02-20 02:40:45.004100 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-20 02:40:45.004111 | orchestrator | Friday 20 February 2026 02:40:40 +0000 (0:00:00.684) 0:01:13.267 ******* 2026-02-20 02:40:45.004122 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.004134 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.004145 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.004155 | orchestrator | 2026-02-20 02:40:45.004166 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-20 02:40:45.004177 | orchestrator | Friday 20 February 2026 02:40:40 +0000 (0:00:00.403) 0:01:13.671 ******* 2026-02-20 02:40:45.004188 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:40:45.004198 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:40:45.004209 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:40:45.004220 | orchestrator | 2026-02-20 02:40:45.004231 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-02-20 02:40:45.004242 | orchestrator | Friday 20 February 2026 02:40:41 +0000 (0:00:00.397) 0:01:14.069 ******* 2026-02-20 02:40:45.004253 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004263 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004274 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004285 | orchestrator | 2026-02-20 02:40:45.004296 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-20 02:40:45.004307 | orchestrator | Friday 20 February 2026 02:40:41 +0000 (0:00:00.309) 0:01:14.378 ******* 2026-02-20 02:40:45.004318 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004328 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004339 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004350 | orchestrator | 2026-02-20 02:40:45.004361 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-20 02:40:45.004371 | orchestrator | Friday 20 February 2026 02:40:42 +0000 (0:00:00.470) 0:01:14.849 ******* 2026-02-20 02:40:45.004382 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004393 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004404 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004414 | orchestrator | 2026-02-20 02:40:45.004425 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-20 02:40:45.004436 | orchestrator | Friday 20 February 2026 02:40:42 +0000 (0:00:00.297) 0:01:15.146 ******* 2026-02-20 02:40:45.004458 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004469 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004480 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004491 | orchestrator | 2026-02-20 02:40:45.004502 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-02-20 02:40:45.004513 | orchestrator | Friday 20 February 2026 02:40:42 +0000 (0:00:00.322) 0:01:15.468 ******* 2026-02-20 02:40:45.004524 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004535 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004545 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004556 | orchestrator | 2026-02-20 02:40:45.004567 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-20 02:40:45.004578 | orchestrator | Friday 20 February 2026 02:40:43 +0000 (0:00:00.287) 0:01:15.756 ******* 2026-02-20 02:40:45.004589 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:40:45.004600 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:40:45.004610 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:40:45.004621 | orchestrator | 2026-02-20 02:40:45.004632 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-20 02:40:45.004643 | orchestrator | Friday 20 February 2026 02:40:43 +0000 (0:00:00.450) 0:01:16.206 ******* 2026-02-20 02:40:45.004656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:45.004670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-20 02:40:45.004687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:45.004707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964698 | orchestrator | 2026-02-20 02:40:50.964717 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-20 02:40:50.964735 | orchestrator | Friday 20 February 2026 02:40:44 +0000 (0:00:01.502) 0:01:17.708 ******* 2026-02-20 02:40:50.964755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.964957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965084 | orchestrator | 2026-02-20 02:40:50.965102 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-20 02:40:50.965120 | orchestrator | Friday 20 February 2026 02:40:48 +0000 (0:00:03.657) 0:01:21.366 ******* 2026-02-20 02:40:50.965138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:40:50.965258 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.741152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.741308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.741327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.741340 | orchestrator | 2026-02-20 02:41:09.741354 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:09.741367 | 
orchestrator | Friday 20 February 2026 02:40:50 +0000 (0:00:01.968) 0:01:23.334 ******* 2026-02-20 02:41:09.741378 | orchestrator | 2026-02-20 02:41:09.741389 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:09.741399 | orchestrator | Friday 20 February 2026 02:40:50 +0000 (0:00:00.059) 0:01:23.394 ******* 2026-02-20 02:41:09.741410 | orchestrator | 2026-02-20 02:41:09.741421 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:09.741432 | orchestrator | Friday 20 February 2026 02:40:50 +0000 (0:00:00.062) 0:01:23.457 ******* 2026-02-20 02:41:09.741443 | orchestrator | 2026-02-20 02:41:09.741454 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-20 02:41:09.741465 | orchestrator | Friday 20 February 2026 02:40:50 +0000 (0:00:00.204) 0:01:23.662 ******* 2026-02-20 02:41:09.741476 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:09.741489 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:09.741499 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:41:09.741510 | orchestrator | 2026-02-20 02:41:09.741521 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-20 02:41:09.741532 | orchestrator | Friday 20 February 2026 02:40:58 +0000 (0:00:07.502) 0:01:31.164 ******* 2026-02-20 02:41:09.741543 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:09.741554 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:09.741564 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:41:09.741575 | orchestrator | 2026-02-20 02:41:09.741586 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-20 02:41:09.741597 | orchestrator | Friday 20 February 2026 02:41:00 +0000 (0:00:02.396) 0:01:33.561 ******* 2026-02-20 02:41:09.741608 | orchestrator | changed: 
[testbed-node-0] 2026-02-20 02:41:09.741619 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:09.741629 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:41:09.741642 | orchestrator | 2026-02-20 02:41:09.741656 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-20 02:41:09.741669 | orchestrator | Friday 20 February 2026 02:41:03 +0000 (0:00:02.417) 0:01:35.978 ******* 2026-02-20 02:41:09.741682 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:41:09.741695 | orchestrator | 2026-02-20 02:41:09.741707 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-20 02:41:09.741721 | orchestrator | Friday 20 February 2026 02:41:03 +0000 (0:00:00.117) 0:01:36.095 ******* 2026-02-20 02:41:09.741762 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:09.741776 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:09.741788 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:09.741800 | orchestrator | 2026-02-20 02:41:09.741813 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-20 02:41:09.741826 | orchestrator | Friday 20 February 2026 02:41:04 +0000 (0:00:00.940) 0:01:37.036 ******* 2026-02-20 02:41:09.741838 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:41:09.741851 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:41:09.741930 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:09.741944 | orchestrator | 2026-02-20 02:41:09.741957 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-20 02:41:09.741969 | orchestrator | Friday 20 February 2026 02:41:04 +0000 (0:00:00.611) 0:01:37.648 ******* 2026-02-20 02:41:09.741983 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:09.741995 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:09.742006 | orchestrator | ok: [testbed-node-2] 2026-02-20 
02:41:09.742082 | orchestrator | 2026-02-20 02:41:09.742097 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-20 02:41:09.742108 | orchestrator | Friday 20 February 2026 02:41:05 +0000 (0:00:00.759) 0:01:38.407 ******* 2026-02-20 02:41:09.742119 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:41:09.742130 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:41:09.742140 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:09.742151 | orchestrator | 2026-02-20 02:41:09.742162 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-20 02:41:09.742173 | orchestrator | Friday 20 February 2026 02:41:06 +0000 (0:00:00.608) 0:01:39.015 ******* 2026-02-20 02:41:09.742184 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:09.742194 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:09.742226 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:09.742238 | orchestrator | 2026-02-20 02:41:09.742249 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-20 02:41:09.742260 | orchestrator | Friday 20 February 2026 02:41:07 +0000 (0:00:00.812) 0:01:39.828 ******* 2026-02-20 02:41:09.742271 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:09.742282 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:09.742293 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:09.742304 | orchestrator | 2026-02-20 02:41:09.742315 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-20 02:41:09.742326 | orchestrator | Friday 20 February 2026 02:41:08 +0000 (0:00:00.950) 0:01:40.779 ******* 2026-02-20 02:41:09.742337 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:09.742348 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:09.742358 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:09.742369 | orchestrator | 2026-02-20 
02:41:09.742380 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-20 02:41:09.742391 | orchestrator | Friday 20 February 2026 02:41:08 +0000 (0:00:00.273) 0:01:41.052 ******* 2026-02-20 02:41:09.742404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742430 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742464 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742475 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742493 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742505 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:09.742527 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672185 | orchestrator | 2026-02-20 02:41:16.672305 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-20 02:41:16.672321 | orchestrator | Friday 20 February 2026 02:41:09 +0000 (0:00:01.386) 0:01:42.439 ******* 2026-02-20 02:41:16.672336 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672352 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672363 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672398 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-20 02:41:16.672473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672484 | orchestrator | 2026-02-20 02:41:16.672496 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-20 02:41:16.672507 | orchestrator | Friday 20 February 2026 02:41:13 +0000 (0:00:03.734) 0:01:46.173 ******* 2026-02-20 02:41:16.672536 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672549 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672560 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 
02:41:16.672580 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672614 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 02:41:16.672653 | orchestrator | 2026-02-20 02:41:16.672664 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:16.672675 | orchestrator | Friday 20 February 2026 02:41:16 +0000 (0:00:02.995) 0:01:49.168 ******* 2026-02-20 02:41:16.672686 | orchestrator | 2026-02-20 02:41:16.672697 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:16.672709 | orchestrator | Friday 20 February 2026 02:41:16 +0000 (0:00:00.068) 0:01:49.236 ******* 2026-02-20 02:41:16.672722 | orchestrator | 2026-02-20 02:41:16.672734 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-20 02:41:16.672747 | orchestrator | Friday 20 February 2026 02:41:16 +0000 (0:00:00.061) 0:01:49.298 ******* 2026-02-20 02:41:16.672776 | orchestrator | 2026-02-20 02:41:16.672808 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-20 02:41:40.588001 | orchestrator | Friday 20 February 2026 02:41:16 +0000 (0:00:00.062) 0:01:49.361 ******* 2026-02-20 02:41:40.588119 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:40.588158 | orchestrator | changed: 
[testbed-node-2] 2026-02-20 02:41:40.588171 | orchestrator | 2026-02-20 02:41:40.588183 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-20 02:41:40.588194 | orchestrator | Friday 20 February 2026 02:41:22 +0000 (0:00:06.138) 0:01:55.500 ******* 2026-02-20 02:41:40.588205 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:40.588216 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:41:40.588227 | orchestrator | 2026-02-20 02:41:40.588238 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-20 02:41:40.588248 | orchestrator | Friday 20 February 2026 02:41:28 +0000 (0:00:06.163) 0:02:01.664 ******* 2026-02-20 02:41:40.588259 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:41:40.588285 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:41:40.588308 | orchestrator | 2026-02-20 02:41:40.588319 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-20 02:41:40.588330 | orchestrator | Friday 20 February 2026 02:41:35 +0000 (0:00:06.202) 0:02:07.867 ******* 2026-02-20 02:41:40.588340 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:41:40.588351 | orchestrator | 2026-02-20 02:41:40.588362 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-20 02:41:40.588372 | orchestrator | Friday 20 February 2026 02:41:35 +0000 (0:00:00.120) 0:02:07.988 ******* 2026-02-20 02:41:40.588383 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:40.588395 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:40.588406 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:40.588416 | orchestrator | 2026-02-20 02:41:40.588427 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-20 02:41:40.588438 | orchestrator | Friday 20 February 2026 02:41:36 +0000 (0:00:00.987) 0:02:08.976 ******* 
2026-02-20 02:41:40.588449 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:41:40.588459 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:41:40.588470 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:40.588482 | orchestrator | 2026-02-20 02:41:40.588493 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-20 02:41:40.588504 | orchestrator | Friday 20 February 2026 02:41:36 +0000 (0:00:00.603) 0:02:09.579 ******* 2026-02-20 02:41:40.588517 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:40.588528 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:40.588541 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:40.588553 | orchestrator | 2026-02-20 02:41:40.588566 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-20 02:41:40.588578 | orchestrator | Friday 20 February 2026 02:41:37 +0000 (0:00:00.807) 0:02:10.386 ******* 2026-02-20 02:41:40.588590 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:41:40.588603 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:41:40.588615 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:41:40.588627 | orchestrator | 2026-02-20 02:41:40.588639 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-20 02:41:40.588652 | orchestrator | Friday 20 February 2026 02:41:38 +0000 (0:00:00.606) 0:02:10.993 ******* 2026-02-20 02:41:40.588665 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:41:40.588677 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:40.588689 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:40.588701 | orchestrator | 2026-02-20 02:41:40.588713 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-20 02:41:40.588725 | orchestrator | Friday 20 February 2026 02:41:39 +0000 (0:00:01.098) 0:02:12.091 ******* 2026-02-20 02:41:40.588737 | orchestrator 
| ok: [testbed-node-0] 2026-02-20 02:41:40.588750 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:41:40.588762 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:41:40.588774 | orchestrator | 2026-02-20 02:41:40.588786 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:41:40.588800 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-20 02:41:40.588822 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-20 02:41:40.588835 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-20 02:41:40.588864 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:41:40.588877 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:41:40.588922 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:41:40.588935 | orchestrator | 2026-02-20 02:41:40.588946 | orchestrator | 2026-02-20 02:41:40.588957 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:41:40.588968 | orchestrator | Friday 20 February 2026 02:41:40 +0000 (0:00:00.890) 0:02:12.982 ******* 2026-02-20 02:41:40.588979 | orchestrator | =============================================================================== 2026-02-20 02:41:40.588989 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.26s 2026-02-20 02:41:40.589000 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.38s 2026-02-20 02:41:40.589011 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.64s 2026-02-20 02:41:40.589022 | orchestrator | ovn-db 
: Restart ovn-northd container ----------------------------------- 8.62s 2026-02-20 02:41:40.589033 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.56s 2026-02-20 02:41:40.589060 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s 2026-02-20 02:41:40.589072 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.66s 2026-02-20 02:41:40.589083 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.00s 2026-02-20 02:41:40.589093 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.46s 2026-02-20 02:41:40.589104 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.97s 2026-02-20 02:41:40.589115 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.55s 2026-02-20 02:41:40.589125 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s 2026-02-20 02:41:40.589136 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.43s 2026-02-20 02:41:40.589147 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.39s 2026-02-20 02:41:40.589157 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2026-02-20 02:41:40.589168 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.33s 2026-02-20 02:41:40.589179 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.10s 2026-02-20 02:41:40.589190 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.01s 2026-02-20 02:41:40.589201 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 0.99s 2026-02-20 02:41:40.589212 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 0.96s 2026-02-20 02:41:40.852946 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-20 02:41:40.853060 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-20 02:41:42.896176 | orchestrator | 2026-02-20 02:41:42 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-20 02:41:53.018314 | orchestrator | 2026-02-20 02:41:53 | INFO  | Task 79b136f3-36f8-4ec6-aeb6-074c568fe6f1 (wipe-partitions) was prepared for execution. 2026-02-20 02:41:53.018430 | orchestrator | 2026-02-20 02:41:53 | INFO  | It takes a moment until task 79b136f3-36f8-4ec6-aeb6-074c568fe6f1 (wipe-partitions) has been started and output is visible here. 2026-02-20 02:42:05.086340 | orchestrator | 2026-02-20 02:42:05.086505 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-20 02:42:05.086533 | orchestrator | 2026-02-20 02:42:05.086554 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-20 02:42:05.086572 | orchestrator | Friday 20 February 2026 02:41:56 +0000 (0:00:00.104) 0:00:00.104 ******* 2026-02-20 02:42:05.086591 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:42:05.086610 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:42:05.086627 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:42:05.086645 | orchestrator | 2026-02-20 02:42:05.086663 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-20 02:42:05.086682 | orchestrator | Friday 20 February 2026 02:41:57 +0000 (0:00:00.590) 0:00:00.694 ******* 2026-02-20 02:42:05.086698 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:05.086715 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:42:05.086732 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:42:05.086750 | orchestrator | 2026-02-20 02:42:05.086768 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-20 02:42:05.086785 | orchestrator | Friday 20 February 2026 02:41:57 +0000 (0:00:00.268) 0:00:00.962 ******* 2026-02-20 02:42:05.086803 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:42:05.086820 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:42:05.086838 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:05.086855 | orchestrator | 2026-02-20 02:42:05.086874 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-20 02:42:05.086894 | orchestrator | Friday 20 February 2026 02:41:58 +0000 (0:00:00.556) 0:00:01.519 ******* 2026-02-20 02:42:05.086913 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:05.086961 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:42:05.086980 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:42:05.086998 | orchestrator | 2026-02-20 02:42:05.087016 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-20 02:42:05.087058 | orchestrator | Friday 20 February 2026 02:41:58 +0000 (0:00:00.238) 0:00:01.758 ******* 2026-02-20 02:42:05.087077 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-20 02:42:05.087095 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-20 02:42:05.087114 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-20 02:42:05.087133 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-20 02:42:05.087153 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-20 02:42:05.087170 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-20 02:42:05.087191 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-20 02:42:05.087208 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-20 02:42:05.087225 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-20 02:42:05.087249 | orchestrator | 2026-02-20 02:42:05.087268 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-20 02:42:05.087286 | orchestrator | Friday 20 February 2026 02:41:59 +0000 (0:00:01.294) 0:00:03.052 ******* 2026-02-20 02:42:05.087304 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-20 02:42:05.087323 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-20 02:42:05.087341 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-20 02:42:05.087360 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-20 02:42:05.087379 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-20 02:42:05.087398 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-20 02:42:05.087416 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-20 02:42:05.087434 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-20 02:42:05.087452 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-20 02:42:05.087471 | orchestrator | 2026-02-20 02:42:05.087490 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-20 02:42:05.087543 | orchestrator | Friday 20 February 2026 02:42:01 +0000 (0:00:01.653) 0:00:04.706 ******* 2026-02-20 02:42:05.087562 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-20 02:42:05.087581 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-20 02:42:05.087599 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-20 02:42:05.087617 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-20 02:42:05.087636 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-20 02:42:05.087654 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-20 02:42:05.087672 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-20 02:42:05.087691 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-20 02:42:05.087710 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-20 02:42:05.087728 | orchestrator | 2026-02-20 02:42:05.087746 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-20 02:42:05.087765 | orchestrator | Friday 20 February 2026 02:42:03 +0000 (0:00:02.181) 0:00:06.887 ******* 2026-02-20 02:42:05.087783 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:42:05.087802 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:42:05.087820 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:42:05.087838 | orchestrator | 2026-02-20 02:42:05.087857 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-20 02:42:05.087875 | orchestrator | Friday 20 February 2026 02:42:04 +0000 (0:00:00.618) 0:00:07.505 ******* 2026-02-20 02:42:05.087893 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:42:05.087911 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:42:05.087958 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:42:05.087978 | orchestrator | 2026-02-20 02:42:05.087996 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:42:05.088017 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:05.088037 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:05.088084 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:05.088102 | orchestrator | 2026-02-20 02:42:05.088120 | orchestrator | 2026-02-20 02:42:05.088140 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:42:05.088159 | orchestrator | Friday 20 February 2026 02:42:04 +0000 
(0:00:00.656) 0:00:08.162 ******* 2026-02-20 02:42:05.088179 | orchestrator | =============================================================================== 2026-02-20 02:42:05.088199 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.18s 2026-02-20 02:42:05.088219 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.65s 2026-02-20 02:42:05.088239 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2026-02-20 02:42:05.088259 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2026-02-20 02:42:05.088279 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-02-20 02:42:05.088299 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-02-20 02:42:05.088319 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.56s 2026-02-20 02:42:05.088338 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2026-02-20 02:42:05.088356 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-02-20 02:42:17.540528 | orchestrator | 2026-02-20 02:42:17 | INFO  | Task 4c555b92-d780-4f72-b738-441503b22e35 (facts) was prepared for execution. 2026-02-20 02:42:17.540669 | orchestrator | 2026-02-20 02:42:17 | INFO  | It takes a moment until task 4c555b92-d780-4f72-b738-441503b22e35 (facts) has been started and output is visible here. 
2026-02-20 02:42:30.559788 | orchestrator | 2026-02-20 02:42:30.559875 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-20 02:42:30.559884 | orchestrator | 2026-02-20 02:42:30.559890 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-20 02:42:30.559896 | orchestrator | Friday 20 February 2026 02:42:21 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-02-20 02:42:30.559902 | orchestrator | ok: [testbed-manager] 2026-02-20 02:42:30.559909 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:42:30.559915 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:42:30.559921 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:42:30.559926 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:30.559931 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:42:30.559937 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:42:30.559997 | orchestrator | 2026-02-20 02:42:30.560003 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-20 02:42:30.560009 | orchestrator | Friday 20 February 2026 02:42:23 +0000 (0:00:01.256) 0:00:01.519 ******* 2026-02-20 02:42:30.560015 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:42:30.560021 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:42:30.560027 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:42:30.560032 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:42:30.560037 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:30.560043 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:42:30.560048 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:42:30.560054 | orchestrator | 2026-02-20 02:42:30.560059 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-20 02:42:30.560065 | orchestrator | 2026-02-20 02:42:30.560070 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-20 02:42:30.560076 | orchestrator | Friday 20 February 2026 02:42:24 +0000 (0:00:01.307) 0:00:02.826 ******* 2026-02-20 02:42:30.560081 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:42:30.560087 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:42:30.560092 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:42:30.560097 | orchestrator | ok: [testbed-manager] 2026-02-20 02:42:30.560103 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:42:30.560108 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:30.560113 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:42:30.560118 | orchestrator | 2026-02-20 02:42:30.560124 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-20 02:42:30.560129 | orchestrator | 2026-02-20 02:42:30.560135 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-20 02:42:30.560140 | orchestrator | Friday 20 February 2026 02:42:29 +0000 (0:00:05.303) 0:00:08.129 ******* 2026-02-20 02:42:30.560146 | orchestrator | skipping: [testbed-manager] 2026-02-20 02:42:30.560151 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:42:30.560156 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:42:30.560162 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:42:30.560167 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:30.560172 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:42:30.560178 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:42:30.560183 | orchestrator | 2026-02-20 02:42:30.560188 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:42:30.560194 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560239 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-20 02:42:30.560246 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560269 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560275 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560280 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560286 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 02:42:30.560291 | orchestrator | 2026-02-20 02:42:30.560296 | orchestrator | 2026-02-20 02:42:30.560302 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:42:30.560307 | orchestrator | Friday 20 February 2026 02:42:30 +0000 (0:00:00.546) 0:00:08.676 ******* 2026-02-20 02:42:30.560312 | orchestrator | =============================================================================== 2026-02-20 02:42:30.560318 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.30s 2026-02-20 02:42:30.560323 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2026-02-20 02:42:30.560329 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s 2026-02-20 02:42:30.560334 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-20 02:42:32.919912 | orchestrator | 2026-02-20 02:42:32 | INFO  | Task 5c8e8a07-d84c-435a-99e0-4cc38006baba (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-20 02:42:32.920077 | orchestrator | 2026-02-20 02:42:32 | INFO  | It takes a moment until task 5c8e8a07-d84c-435a-99e0-4cc38006baba (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-20 02:42:43.707563 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-20 02:42:43.707712 | orchestrator | 2.16.14 2026-02-20 02:42:43.707731 | orchestrator | 2026-02-20 02:42:43.707744 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-20 02:42:43.707756 | orchestrator | 2026-02-20 02:42:43.707767 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-20 02:42:43.707779 | orchestrator | Friday 20 February 2026 02:42:36 +0000 (0:00:00.297) 0:00:00.297 ******* 2026-02-20 02:42:43.707791 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 02:42:43.707810 | orchestrator | 2026-02-20 02:42:43.707829 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-20 02:42:43.707847 | orchestrator | Friday 20 February 2026 02:42:37 +0000 (0:00:00.238) 0:00:00.536 ******* 2026-02-20 02:42:43.707865 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:43.707882 | orchestrator | 2026-02-20 02:42:43.707900 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.707918 | orchestrator | Friday 20 February 2026 02:42:37 +0000 (0:00:00.211) 0:00:00.747 ******* 2026-02-20 02:42:43.707936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-20 02:42:43.707987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-20 02:42:43.708009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-20 02:42:43.708029 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-20 02:42:43.708049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-20 02:42:43.708061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-20 02:42:43.708073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-20 02:42:43.708086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-20 02:42:43.708121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-20 02:42:43.708134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-20 02:42:43.708146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-20 02:42:43.708158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-20 02:42:43.708170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-20 02:42:43.708182 | orchestrator | 2026-02-20 02:42:43.708195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708208 | orchestrator | Friday 20 February 2026 02:42:37 +0000 (0:00:00.427) 0:00:01.175 ******* 2026-02-20 02:42:43.708220 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708231 | orchestrator | 2026-02-20 02:42:43.708242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708253 | orchestrator | Friday 20 February 2026 02:42:37 +0000 (0:00:00.172) 0:00:01.347 ******* 2026-02-20 02:42:43.708263 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708274 | orchestrator | 2026-02-20 02:42:43.708285 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708295 | orchestrator | Friday 20 February 2026 02:42:38 +0000 (0:00:00.190) 0:00:01.538 ******* 2026-02-20 02:42:43.708306 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708317 | orchestrator | 2026-02-20 02:42:43.708328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708338 | orchestrator | Friday 20 February 2026 02:42:38 +0000 (0:00:00.191) 0:00:01.730 ******* 2026-02-20 02:42:43.708349 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708360 | orchestrator | 2026-02-20 02:42:43.708370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708381 | orchestrator | Friday 20 February 2026 02:42:38 +0000 (0:00:00.190) 0:00:01.921 ******* 2026-02-20 02:42:43.708392 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708402 | orchestrator | 2026-02-20 02:42:43.708413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708424 | orchestrator | Friday 20 February 2026 02:42:38 +0000 (0:00:00.192) 0:00:02.113 ******* 2026-02-20 02:42:43.708435 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708445 | orchestrator | 2026-02-20 02:42:43.708456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708467 | orchestrator | Friday 20 February 2026 02:42:38 +0000 (0:00:00.183) 0:00:02.297 ******* 2026-02-20 02:42:43.708478 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708488 | orchestrator | 2026-02-20 02:42:43.708499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708510 | orchestrator | Friday 20 February 2026 02:42:39 +0000 (0:00:00.170) 0:00:02.467 ******* 
2026-02-20 02:42:43.708520 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.708531 | orchestrator | 2026-02-20 02:42:43.708542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708552 | orchestrator | Friday 20 February 2026 02:42:39 +0000 (0:00:00.184) 0:00:02.652 ******* 2026-02-20 02:42:43.708563 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4) 2026-02-20 02:42:43.708575 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4) 2026-02-20 02:42:43.708586 | orchestrator | 2026-02-20 02:42:43.708597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708636 | orchestrator | Friday 20 February 2026 02:42:39 +0000 (0:00:00.387) 0:00:03.039 ******* 2026-02-20 02:42:43.708648 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737) 2026-02-20 02:42:43.708667 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737) 2026-02-20 02:42:43.708679 | orchestrator | 2026-02-20 02:42:43.708689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708700 | orchestrator | Friday 20 February 2026 02:42:40 +0000 (0:00:00.650) 0:00:03.690 ******* 2026-02-20 02:42:43.708711 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2) 2026-02-20 02:42:43.708722 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2) 2026-02-20 02:42:43.708733 | orchestrator | 2026-02-20 02:42:43.708743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708754 | orchestrator | Friday 20 February 2026 02:42:40 
+0000 (0:00:00.551) 0:00:04.241 ******* 2026-02-20 02:42:43.708765 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25) 2026-02-20 02:42:43.708776 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25) 2026-02-20 02:42:43.708787 | orchestrator | 2026-02-20 02:42:43.708798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:42:43.708808 | orchestrator | Friday 20 February 2026 02:42:41 +0000 (0:00:00.811) 0:00:05.053 ******* 2026-02-20 02:42:43.708819 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-20 02:42:43.708829 | orchestrator | 2026-02-20 02:42:43.708840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.708851 | orchestrator | Friday 20 February 2026 02:42:41 +0000 (0:00:00.322) 0:00:05.375 ******* 2026-02-20 02:42:43.708861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-20 02:42:43.708872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-20 02:42:43.708882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-20 02:42:43.708893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-20 02:42:43.708903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-20 02:42:43.708914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-20 02:42:43.708924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-20 02:42:43.708935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-20 02:42:43.708945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-20 02:42:43.708983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-20 02:42:43.709003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-20 02:42:43.709021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-20 02:42:43.709039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-20 02:42:43.709053 | orchestrator | 2026-02-20 02:42:43.709064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709075 | orchestrator | Friday 20 February 2026 02:42:42 +0000 (0:00:00.375) 0:00:05.751 ******* 2026-02-20 02:42:43.709085 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709096 | orchestrator | 2026-02-20 02:42:43.709106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709117 | orchestrator | Friday 20 February 2026 02:42:42 +0000 (0:00:00.192) 0:00:05.944 ******* 2026-02-20 02:42:43.709128 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709138 | orchestrator | 2026-02-20 02:42:43.709149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709167 | orchestrator | Friday 20 February 2026 02:42:42 +0000 (0:00:00.195) 0:00:06.139 ******* 2026-02-20 02:42:43.709178 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709189 | orchestrator | 2026-02-20 02:42:43.709199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709210 | orchestrator | Friday 20 February 2026 02:42:42 
+0000 (0:00:00.200) 0:00:06.339 ******* 2026-02-20 02:42:43.709221 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709231 | orchestrator | 2026-02-20 02:42:43.709242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709253 | orchestrator | Friday 20 February 2026 02:42:43 +0000 (0:00:00.185) 0:00:06.524 ******* 2026-02-20 02:42:43.709263 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709274 | orchestrator | 2026-02-20 02:42:43.709284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709295 | orchestrator | Friday 20 February 2026 02:42:43 +0000 (0:00:00.188) 0:00:06.712 ******* 2026-02-20 02:42:43.709305 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709316 | orchestrator | 2026-02-20 02:42:43.709327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:43.709337 | orchestrator | Friday 20 February 2026 02:42:43 +0000 (0:00:00.183) 0:00:06.896 ******* 2026-02-20 02:42:43.709348 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:43.709372 | orchestrator | 2026-02-20 02:42:43.709405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.064867 | orchestrator | Friday 20 February 2026 02:42:43 +0000 (0:00:00.180) 0:00:07.077 ******* 2026-02-20 02:42:51.065039 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065103 | orchestrator | 2026-02-20 02:42:51.065125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.065145 | orchestrator | Friday 20 February 2026 02:42:43 +0000 (0:00:00.187) 0:00:07.265 ******* 2026-02-20 02:42:51.065164 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-20 02:42:51.065184 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-20 
02:42:51.065202 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-20 02:42:51.065222 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-20 02:42:51.065241 | orchestrator | 2026-02-20 02:42:51.065262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.065282 | orchestrator | Friday 20 February 2026 02:42:44 +0000 (0:00:00.872) 0:00:08.137 ******* 2026-02-20 02:42:51.065303 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065324 | orchestrator | 2026-02-20 02:42:51.065344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.065364 | orchestrator | Friday 20 February 2026 02:42:44 +0000 (0:00:00.185) 0:00:08.322 ******* 2026-02-20 02:42:51.065384 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065405 | orchestrator | 2026-02-20 02:42:51.065425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.065445 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.183) 0:00:08.506 ******* 2026-02-20 02:42:51.065465 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065484 | orchestrator | 2026-02-20 02:42:51.065504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:42:51.065523 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.192) 0:00:08.698 ******* 2026-02-20 02:42:51.065545 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065567 | orchestrator | 2026-02-20 02:42:51.065588 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-20 02:42:51.065610 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.199) 0:00:08.898 ******* 2026-02-20 02:42:51.065633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-20 02:42:51.065652 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-20 02:42:51.065708 | orchestrator | 2026-02-20 02:42:51.065731 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-20 02:42:51.065752 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.186) 0:00:09.084 ******* 2026-02-20 02:42:51.065770 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065789 | orchestrator | 2026-02-20 02:42:51.065811 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-20 02:42:51.065831 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.140) 0:00:09.225 ******* 2026-02-20 02:42:51.065850 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065870 | orchestrator | 2026-02-20 02:42:51.065889 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-20 02:42:51.065908 | orchestrator | Friday 20 February 2026 02:42:45 +0000 (0:00:00.137) 0:00:09.362 ******* 2026-02-20 02:42:51.065926 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.065943 | orchestrator | 2026-02-20 02:42:51.066151 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-20 02:42:51.066187 | orchestrator | Friday 20 February 2026 02:42:46 +0000 (0:00:00.147) 0:00:09.509 ******* 2026-02-20 02:42:51.066205 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:51.066223 | orchestrator | 2026-02-20 02:42:51.066242 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-20 02:42:51.066260 | orchestrator | Friday 20 February 2026 02:42:46 +0000 (0:00:00.143) 0:00:09.653 ******* 2026-02-20 02:42:51.066278 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}}) 2026-02-20 02:42:51.066297 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}}) 2026-02-20 02:42:51.066315 | orchestrator | 2026-02-20 02:42:51.066333 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-20 02:42:51.066352 | orchestrator | Friday 20 February 2026 02:42:46 +0000 (0:00:00.164) 0:00:09.818 ******* 2026-02-20 02:42:51.066371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}})  2026-02-20 02:42:51.066391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}})  2026-02-20 02:42:51.066408 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066426 | orchestrator | 2026-02-20 02:42:51.066445 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-20 02:42:51.066463 | orchestrator | Friday 20 February 2026 02:42:46 +0000 (0:00:00.324) 0:00:10.143 ******* 2026-02-20 02:42:51.066481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}})  2026-02-20 02:42:51.066500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}})  2026-02-20 02:42:51.066519 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066537 | orchestrator | 2026-02-20 02:42:51.066555 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-20 02:42:51.066570 | orchestrator | Friday 20 February 2026 02:42:46 +0000 (0:00:00.158) 0:00:10.302 ******* 2026-02-20 02:42:51.066581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}})  2026-02-20 02:42:51.066643 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}})  2026-02-20 02:42:51.066665 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066683 | orchestrator | 2026-02-20 02:42:51.066701 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-20 02:42:51.066720 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.154) 0:00:10.456 ******* 2026-02-20 02:42:51.066738 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:51.066756 | orchestrator | 2026-02-20 02:42:51.066794 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-20 02:42:51.066814 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.141) 0:00:10.598 ******* 2026-02-20 02:42:51.066826 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:42:51.066836 | orchestrator | 2026-02-20 02:42:51.066847 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-20 02:42:51.066858 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.144) 0:00:10.743 ******* 2026-02-20 02:42:51.066868 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066879 | orchestrator | 2026-02-20 02:42:51.066889 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-20 02:42:51.066900 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.140) 0:00:10.883 ******* 2026-02-20 02:42:51.066910 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066921 | orchestrator | 2026-02-20 02:42:51.066932 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-20 02:42:51.066942 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.132) 0:00:11.015 ******* 2026-02-20 02:42:51.066953 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.066998 | orchestrator | 2026-02-20 
02:42:51.067011 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-20 02:42:51.067021 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.144) 0:00:11.159 ******* 2026-02-20 02:42:51.067032 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:42:51.067042 | orchestrator |  "ceph_osd_devices": { 2026-02-20 02:42:51.067054 | orchestrator |  "sdb": { 2026-02-20 02:42:51.067064 | orchestrator |  "osd_lvm_uuid": "59fbb122-dcd4-5ddb-8fde-378adfe4b14f" 2026-02-20 02:42:51.067075 | orchestrator |  }, 2026-02-20 02:42:51.067086 | orchestrator |  "sdc": { 2026-02-20 02:42:51.067097 | orchestrator |  "osd_lvm_uuid": "dc3a4123-87de-5eee-bc1c-01eb52a96fe2" 2026-02-20 02:42:51.067107 | orchestrator |  } 2026-02-20 02:42:51.067118 | orchestrator |  } 2026-02-20 02:42:51.067129 | orchestrator | } 2026-02-20 02:42:51.067140 | orchestrator | 2026-02-20 02:42:51.067151 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-20 02:42:51.067161 | orchestrator | Friday 20 February 2026 02:42:47 +0000 (0:00:00.148) 0:00:11.308 ******* 2026-02-20 02:42:51.067172 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.067183 | orchestrator | 2026-02-20 02:42:51.067193 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-20 02:42:51.067204 | orchestrator | Friday 20 February 2026 02:42:48 +0000 (0:00:00.134) 0:00:11.442 ******* 2026-02-20 02:42:51.067214 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.067225 | orchestrator | 2026-02-20 02:42:51.067236 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-20 02:42:51.067246 | orchestrator | Friday 20 February 2026 02:42:48 +0000 (0:00:00.145) 0:00:11.588 ******* 2026-02-20 02:42:51.067257 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:42:51.067268 | orchestrator | 2026-02-20 
02:42:51.067278 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-20 02:42:51.067289 | orchestrator | Friday 20 February 2026 02:42:48 +0000 (0:00:00.137) 0:00:11.725 ******* 2026-02-20 02:42:51.067298 | orchestrator | changed: [testbed-node-3] => { 2026-02-20 02:42:51.067308 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-20 02:42:51.067318 | orchestrator |  "ceph_osd_devices": { 2026-02-20 02:42:51.067328 | orchestrator |  "sdb": { 2026-02-20 02:42:51.067345 | orchestrator |  "osd_lvm_uuid": "59fbb122-dcd4-5ddb-8fde-378adfe4b14f" 2026-02-20 02:42:51.067361 | orchestrator |  }, 2026-02-20 02:42:51.067377 | orchestrator |  "sdc": { 2026-02-20 02:42:51.067394 | orchestrator |  "osd_lvm_uuid": "dc3a4123-87de-5eee-bc1c-01eb52a96fe2" 2026-02-20 02:42:51.067412 | orchestrator |  } 2026-02-20 02:42:51.067428 | orchestrator |  }, 2026-02-20 02:42:51.067444 | orchestrator |  "lvm_volumes": [ 2026-02-20 02:42:51.067468 | orchestrator |  { 2026-02-20 02:42:51.067479 | orchestrator |  "data": "osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f", 2026-02-20 02:42:51.067488 | orchestrator |  "data_vg": "ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f" 2026-02-20 02:42:51.067502 | orchestrator |  }, 2026-02-20 02:42:51.067517 | orchestrator |  { 2026-02-20 02:42:51.067534 | orchestrator |  "data": "osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2", 2026-02-20 02:42:51.067551 | orchestrator |  "data_vg": "ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2" 2026-02-20 02:42:51.067568 | orchestrator |  } 2026-02-20 02:42:51.067585 | orchestrator |  ] 2026-02-20 02:42:51.067601 | orchestrator |  } 2026-02-20 02:42:51.067617 | orchestrator | } 2026-02-20 02:42:51.067628 | orchestrator | 2026-02-20 02:42:51.067637 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-20 02:42:51.067647 | orchestrator | Friday 20 February 2026 02:42:48 +0000 (0:00:00.409) 0:00:12.134 ******* 2026-02-20 
02:42:51.067656 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 02:42:51.067666 | orchestrator | 2026-02-20 02:42:51.067675 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-20 02:42:51.067685 | orchestrator | 2026-02-20 02:42:51.067694 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-20 02:42:51.067704 | orchestrator | Friday 20 February 2026 02:42:50 +0000 (0:00:01.805) 0:00:13.940 ******* 2026-02-20 02:42:51.067713 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-20 02:42:51.067723 | orchestrator | 2026-02-20 02:42:51.067733 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-20 02:42:51.067742 | orchestrator | Friday 20 February 2026 02:42:50 +0000 (0:00:00.259) 0:00:14.200 ******* 2026-02-20 02:42:51.067752 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:42:51.067762 | orchestrator | 2026-02-20 02:42:51.067789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028187 | orchestrator | Friday 20 February 2026 02:42:51 +0000 (0:00:00.238) 0:00:14.439 ******* 2026-02-20 02:43:00.028297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-20 02:43:00.028315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-20 02:43:00.028327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-20 02:43:00.028339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-20 02:43:00.028350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-20 02:43:00.028362 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-20 02:43:00.028373 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-20 02:43:00.028384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-20 02:43:00.028395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-20 02:43:00.028406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-20 02:43:00.028417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-20 02:43:00.028428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-20 02:43:00.028439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-20 02:43:00.028451 | orchestrator | 2026-02-20 02:43:00.028463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028474 | orchestrator | Friday 20 February 2026 02:42:51 +0000 (0:00:00.377) 0:00:14.816 ******* 2026-02-20 02:43:00.028486 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028530 | orchestrator | 2026-02-20 02:43:00.028558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028581 | orchestrator | Friday 20 February 2026 02:42:51 +0000 (0:00:00.196) 0:00:15.012 ******* 2026-02-20 02:43:00.028599 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028618 | orchestrator | 2026-02-20 02:43:00.028636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028653 | orchestrator | Friday 20 February 2026 02:42:51 +0000 (0:00:00.201) 0:00:15.214 ******* 2026-02-20 02:43:00.028670 | orchestrator | skipping: 
[testbed-node-4] 2026-02-20 02:43:00.028688 | orchestrator | 2026-02-20 02:43:00.028708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028728 | orchestrator | Friday 20 February 2026 02:42:52 +0000 (0:00:00.204) 0:00:15.418 ******* 2026-02-20 02:43:00.028748 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028767 | orchestrator | 2026-02-20 02:43:00.028786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028806 | orchestrator | Friday 20 February 2026 02:42:52 +0000 (0:00:00.611) 0:00:16.029 ******* 2026-02-20 02:43:00.028827 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028848 | orchestrator | 2026-02-20 02:43:00.028868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028885 | orchestrator | Friday 20 February 2026 02:42:52 +0000 (0:00:00.206) 0:00:16.236 ******* 2026-02-20 02:43:00.028898 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028911 | orchestrator | 2026-02-20 02:43:00.028922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.028933 | orchestrator | Friday 20 February 2026 02:42:53 +0000 (0:00:00.207) 0:00:16.444 ******* 2026-02-20 02:43:00.028944 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.028955 | orchestrator | 2026-02-20 02:43:00.028988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.029001 | orchestrator | Friday 20 February 2026 02:42:53 +0000 (0:00:00.210) 0:00:16.654 ******* 2026-02-20 02:43:00.029012 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:43:00.029022 | orchestrator | 2026-02-20 02:43:00.029033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.029044 | 
orchestrator | Friday 20 February 2026 02:42:53 +0000 (0:00:00.208) 0:00:16.863 ******* 2026-02-20 02:43:00.029056 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178) 2026-02-20 02:43:00.029068 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178) 2026-02-20 02:43:00.029079 | orchestrator | 2026-02-20 02:43:00.029090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.029101 | orchestrator | Friday 20 February 2026 02:42:53 +0000 (0:00:00.422) 0:00:17.286 ******* 2026-02-20 02:43:00.029112 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6) 2026-02-20 02:43:00.029123 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6) 2026-02-20 02:43:00.029133 | orchestrator | 2026-02-20 02:43:00.029144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.029155 | orchestrator | Friday 20 February 2026 02:42:54 +0000 (0:00:00.423) 0:00:17.710 ******* 2026-02-20 02:43:00.029166 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289) 2026-02-20 02:43:00.029177 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289) 2026-02-20 02:43:00.029188 | orchestrator | 2026-02-20 02:43:00.029214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:43:00.029245 | orchestrator | Friday 20 February 2026 02:42:54 +0000 (0:00:00.432) 0:00:18.142 ******* 2026-02-20 02:43:00.029257 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca) 2026-02-20 02:43:00.029279 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca)
2026-02-20 02:43:00.029291 | orchestrator |
2026-02-20 02:43:00.029302 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:00.029312 | orchestrator | Friday 20 February 2026 02:42:55 +0000 (0:00:00.634) 0:00:18.776 *******
2026-02-20 02:43:00.029323 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-20 02:43:00.029334 | orchestrator |
2026-02-20 02:43:00.029345 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029355 | orchestrator | Friday 20 February 2026 02:42:55 +0000 (0:00:00.549) 0:00:19.325 *******
2026-02-20 02:43:00.029366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-20 02:43:00.029377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-20 02:43:00.029388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-20 02:43:00.029399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-20 02:43:00.029410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-20 02:43:00.029420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-20 02:43:00.029431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-20 02:43:00.029442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-20 02:43:00.029453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-20 02:43:00.029464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-20 02:43:00.029474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-20 02:43:00.029485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-20 02:43:00.029496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-20 02:43:00.029507 | orchestrator |
2026-02-20 02:43:00.029517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029528 | orchestrator | Friday 20 February 2026 02:42:56 +0000 (0:00:00.804) 0:00:20.130 *******
2026-02-20 02:43:00.029539 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029549 | orchestrator |
2026-02-20 02:43:00.029560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029571 | orchestrator | Friday 20 February 2026 02:42:56 +0000 (0:00:00.204) 0:00:20.335 *******
2026-02-20 02:43:00.029581 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029613 | orchestrator |
2026-02-20 02:43:00.029624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029635 | orchestrator | Friday 20 February 2026 02:42:57 +0000 (0:00:00.206) 0:00:20.542 *******
2026-02-20 02:43:00.029646 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029657 | orchestrator |
2026-02-20 02:43:00.029668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029679 | orchestrator | Friday 20 February 2026 02:42:57 +0000 (0:00:00.214) 0:00:20.757 *******
2026-02-20 02:43:00.029690 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029701 | orchestrator |
2026-02-20 02:43:00.029712 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029723 | orchestrator | Friday 20 February 2026 02:42:57 +0000 (0:00:00.204) 0:00:20.961 *******
2026-02-20 02:43:00.029734 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029745 | orchestrator |
2026-02-20 02:43:00.029756 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029774 | orchestrator | Friday 20 February 2026 02:42:57 +0000 (0:00:00.214) 0:00:21.175 *******
2026-02-20 02:43:00.029785 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029796 | orchestrator |
2026-02-20 02:43:00.029807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029818 | orchestrator | Friday 20 February 2026 02:42:58 +0000 (0:00:00.210) 0:00:21.386 *******
2026-02-20 02:43:00.029829 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029840 | orchestrator |
2026-02-20 02:43:00.029850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029861 | orchestrator | Friday 20 February 2026 02:42:58 +0000 (0:00:00.220) 0:00:21.606 *******
2026-02-20 02:43:00.029872 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:00.029883 | orchestrator |
2026-02-20 02:43:00.029894 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.029905 | orchestrator | Friday 20 February 2026 02:42:58 +0000 (0:00:00.205) 0:00:21.811 *******
2026-02-20 02:43:00.029916 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-20 02:43:00.029928 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-20 02:43:00.029939 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-20 02:43:00.029950 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-20 02:43:00.029961 | orchestrator |
2026-02-20 02:43:00.029992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:00.030009 | orchestrator | Friday 20 February 2026 02:42:59 +0000 (0:00:00.919) 0:00:22.731 *******
2026-02-20 02:43:00.030080 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079283 | orchestrator |
2026-02-20 02:43:06.079362 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:06.079369 | orchestrator | Friday 20 February 2026 02:43:00 +0000 (0:00:00.671) 0:00:23.402 *******
2026-02-20 02:43:06.079374 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079379 | orchestrator |
2026-02-20 02:43:06.079383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:06.079388 | orchestrator | Friday 20 February 2026 02:43:00 +0000 (0:00:00.226) 0:00:23.628 *******
2026-02-20 02:43:06.079392 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079396 | orchestrator |
2026-02-20 02:43:06.079400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:06.079403 | orchestrator | Friday 20 February 2026 02:43:00 +0000 (0:00:00.214) 0:00:23.843 *******
2026-02-20 02:43:06.079407 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079411 | orchestrator |
2026-02-20 02:43:06.079415 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-20 02:43:06.079418 | orchestrator | Friday 20 February 2026 02:43:00 +0000 (0:00:00.221) 0:00:24.065 *******
2026-02-20 02:43:06.079422 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-20 02:43:06.079427 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-20 02:43:06.079430 | orchestrator |
2026-02-20 02:43:06.079434 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-20 02:43:06.079438 | orchestrator | Friday 20 February 2026 02:43:00 +0000 (0:00:00.187) 0:00:24.253 *******
2026-02-20 02:43:06.079442 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079445 | orchestrator |
2026-02-20 02:43:06.079449 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-20 02:43:06.079453 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.137) 0:00:24.390 *******
2026-02-20 02:43:06.079457 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079460 | orchestrator |
2026-02-20 02:43:06.079464 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-20 02:43:06.079468 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.148) 0:00:24.539 *******
2026-02-20 02:43:06.079472 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079489 | orchestrator |
2026-02-20 02:43:06.079493 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-20 02:43:06.079497 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.131) 0:00:24.671 *******
2026-02-20 02:43:06.079501 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:43:06.079505 | orchestrator |
2026-02-20 02:43:06.079509 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-20 02:43:06.079514 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.148) 0:00:24.820 *******
2026-02-20 02:43:06.079518 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad1d47ce-3300-5f5f-a456-60212d7294ef'}})
2026-02-20 02:43:06.079522 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}})
2026-02-20 02:43:06.079526 | orchestrator |
2026-02-20 02:43:06.079530 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-20 02:43:06.079533 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.164) 0:00:24.984 *******
2026-02-20 02:43:06.079538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad1d47ce-3300-5f5f-a456-60212d7294ef'}})
2026-02-20 02:43:06.079542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}})
2026-02-20 02:43:06.079546 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079549 | orchestrator |
2026-02-20 02:43:06.079553 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-20 02:43:06.079557 | orchestrator | Friday 20 February 2026 02:43:01 +0000 (0:00:00.155) 0:00:25.139 *******
2026-02-20 02:43:06.079561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad1d47ce-3300-5f5f-a456-60212d7294ef'}})
2026-02-20 02:43:06.079564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}})
2026-02-20 02:43:06.079568 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079572 | orchestrator |
2026-02-20 02:43:06.079576 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-20 02:43:06.079579 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.372) 0:00:25.512 *******
2026-02-20 02:43:06.079583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad1d47ce-3300-5f5f-a456-60212d7294ef'}})
2026-02-20 02:43:06.079587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}})
2026-02-20 02:43:06.079591 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079595 | orchestrator |
2026-02-20 02:43:06.079598 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-20 02:43:06.079602 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.165) 0:00:25.677 *******
2026-02-20 02:43:06.079606 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:43:06.079610 | orchestrator |
2026-02-20 02:43:06.079613 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-20 02:43:06.079617 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.154) 0:00:25.831 *******
2026-02-20 02:43:06.079621 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:43:06.079624 | orchestrator |
2026-02-20 02:43:06.079628 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-20 02:43:06.079643 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.153) 0:00:25.985 *******
2026-02-20 02:43:06.079659 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079663 | orchestrator |
2026-02-20 02:43:06.079667 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-20 02:43:06.079671 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.146) 0:00:26.131 *******
2026-02-20 02:43:06.079674 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079678 | orchestrator |
2026-02-20 02:43:06.079682 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-20 02:43:06.079689 | orchestrator | Friday 20 February 2026 02:43:02 +0000 (0:00:00.149) 0:00:26.280 *******
2026-02-20 02:43:06.079693 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079696 | orchestrator |
2026-02-20 02:43:06.079700 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-20 02:43:06.079704 | orchestrator | Friday 20 February 2026 02:43:03 +0000 (0:00:00.142) 0:00:26.423 *******
2026-02-20 02:43:06.079708 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 02:43:06.079712 | orchestrator |     "ceph_osd_devices": {
2026-02-20 02:43:06.079716 | orchestrator |         "sdb": {
2026-02-20 02:43:06.079720 | orchestrator |             "osd_lvm_uuid": "ad1d47ce-3300-5f5f-a456-60212d7294ef"
2026-02-20 02:43:06.079724 | orchestrator |         },
2026-02-20 02:43:06.079728 | orchestrator |         "sdc": {
2026-02-20 02:43:06.079731 | orchestrator |             "osd_lvm_uuid": "5fdd3cdc-a96e-5423-81ac-d20dc4add6fd"
2026-02-20 02:43:06.079735 | orchestrator |         }
2026-02-20 02:43:06.079739 | orchestrator |     }
2026-02-20 02:43:06.079743 | orchestrator | }
2026-02-20 02:43:06.079747 | orchestrator |
2026-02-20 02:43:06.079751 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-20 02:43:06.079755 | orchestrator | Friday 20 February 2026 02:43:03 +0000 (0:00:00.148) 0:00:26.571 *******
2026-02-20 02:43:06.079758 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079762 | orchestrator |
2026-02-20 02:43:06.079766 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-20 02:43:06.079770 | orchestrator | Friday 20 February 2026 02:43:03 +0000 (0:00:00.144) 0:00:26.716 *******
2026-02-20 02:43:06.079773 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079777 | orchestrator |
2026-02-20 02:43:06.079781 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-20 02:43:06.079785 | orchestrator | Friday 20 February 2026 02:43:03 +0000 (0:00:00.139) 0:00:26.856 *******
2026-02-20 02:43:06.079789 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:43:06.079792 | orchestrator |
2026-02-20 02:43:06.079796 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-20 02:43:06.079800 | orchestrator | Friday 20 February 2026 02:43:03 +0000 (0:00:00.129) 0:00:26.985 *******
2026-02-20 02:43:06.079804 | orchestrator | changed: [testbed-node-4] => {
2026-02-20 02:43:06.079808 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-20 02:43:06.079811 | orchestrator |         "ceph_osd_devices": {
2026-02-20 02:43:06.079815 | orchestrator |             "sdb": {
2026-02-20 02:43:06.079819 | orchestrator |                 "osd_lvm_uuid": "ad1d47ce-3300-5f5f-a456-60212d7294ef"
2026-02-20 02:43:06.079823 | orchestrator |             },
2026-02-20 02:43:06.079827 | orchestrator |             "sdc": {
2026-02-20 02:43:06.079831 | orchestrator |                 "osd_lvm_uuid": "5fdd3cdc-a96e-5423-81ac-d20dc4add6fd"
2026-02-20 02:43:06.079835 | orchestrator |             }
2026-02-20 02:43:06.079838 | orchestrator |         },
2026-02-20 02:43:06.079842 | orchestrator |         "lvm_volumes": [
2026-02-20 02:43:06.079846 | orchestrator |             {
2026-02-20 02:43:06.079850 | orchestrator |                 "data": "osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef",
2026-02-20 02:43:06.079855 | orchestrator |                 "data_vg": "ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef"
2026-02-20 02:43:06.079859 | orchestrator |             },
2026-02-20 02:43:06.079864 | orchestrator |             {
2026-02-20 02:43:06.079868 | orchestrator |                 "data": "osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd",
2026-02-20 02:43:06.079873 | orchestrator |                 "data_vg": "ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd"
2026-02-20 02:43:06.079877 | orchestrator |             }
2026-02-20 02:43:06.079882 | orchestrator |         ]
2026-02-20 02:43:06.079886 | orchestrator |     }
2026-02-20 02:43:06.079890 | orchestrator | }
2026-02-20 02:43:06.079895 | orchestrator |
2026-02-20 02:43:06.079899 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-20 02:43:06.079907 | orchestrator | Friday 20 February 2026 02:43:04 +0000 (0:00:00.411) 0:00:27.397 *******
2026-02-20 02:43:06.079911 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-20 02:43:06.079916 | orchestrator |
2026-02-20 02:43:06.079920 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-20 02:43:06.079924 | orchestrator |
2026-02-20 02:43:06.079929 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-20 02:43:06.079934 | orchestrator | Friday 20 February 2026 02:43:05 +0000 (0:00:01.164) 0:00:28.562 *******
2026-02-20 02:43:06.079938 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-20 02:43:06.079942 | orchestrator |
2026-02-20 02:43:06.079947 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-20 02:43:06.079951 | orchestrator | Friday 20 February 2026 02:43:05 +0000 (0:00:00.267) 0:00:28.829 *******
2026-02-20 02:43:06.079956 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:43:06.079960 | orchestrator |
2026-02-20 02:43:06.079965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:06.079969 | orchestrator | Friday 20 February 2026 02:43:05 +0000 (0:00:00.235) 0:00:29.065 *******
2026-02-20 02:43:06.079991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-20 02:43:06.079996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-20 02:43:06.080000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-20 02:43:06.080005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-20 02:43:06.080010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-20 02:43:06.080020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-20 02:43:14.722616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-20 02:43:14.722721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-20 02:43:14.722737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-20 02:43:14.722749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-20 02:43:14.722760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-20 02:43:14.722771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-20 02:43:14.722782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-20 02:43:14.722794 | orchestrator |
2026-02-20 02:43:14.722806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.722818 | orchestrator | Friday 20 February 2026 02:43:06 +0000 (0:00:00.383) 0:00:29.449 *******
2026-02-20 02:43:14.722829 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.722842 | orchestrator |
2026-02-20 02:43:14.722853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.722864 | orchestrator | Friday 20 February 2026 02:43:06 +0000 (0:00:00.216) 0:00:29.665 *******
2026-02-20 02:43:14.722875 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.722886 | orchestrator |
2026-02-20 02:43:14.722897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.722908 | orchestrator | Friday 20 February 2026 02:43:06 +0000 (0:00:00.190) 0:00:29.856 *******
2026-02-20 02:43:14.722919 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.722930 | orchestrator |
2026-02-20 02:43:14.722941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.722952 | orchestrator | Friday 20 February 2026 02:43:06 +0000 (0:00:00.199) 0:00:30.056 *******
2026-02-20 02:43:14.722963 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723024 | orchestrator |
2026-02-20 02:43:14.723037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723048 | orchestrator | Friday 20 February 2026 02:43:07 +0000 (0:00:00.686) 0:00:30.742 *******
2026-02-20 02:43:14.723059 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723070 | orchestrator |
2026-02-20 02:43:14.723081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723091 | orchestrator | Friday 20 February 2026 02:43:07 +0000 (0:00:00.219) 0:00:30.961 *******
2026-02-20 02:43:14.723102 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723113 | orchestrator |
2026-02-20 02:43:14.723124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723134 | orchestrator | Friday 20 February 2026 02:43:07 +0000 (0:00:00.210) 0:00:31.172 *******
2026-02-20 02:43:14.723146 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723158 | orchestrator |
2026-02-20 02:43:14.723171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723183 | orchestrator | Friday 20 February 2026 02:43:07 +0000 (0:00:00.205) 0:00:31.377 *******
2026-02-20 02:43:14.723196 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723208 | orchestrator |
2026-02-20 02:43:14.723220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723233 | orchestrator | Friday 20 February 2026 02:43:08 +0000 (0:00:00.218) 0:00:31.596 *******
2026-02-20 02:43:14.723245 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c)
2026-02-20 02:43:14.723259 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c)
2026-02-20 02:43:14.723271 | orchestrator |
2026-02-20 02:43:14.723283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723295 | orchestrator | Friday 20 February 2026 02:43:08 +0000 (0:00:00.421) 0:00:32.017 *******
2026-02-20 02:43:14.723308 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57)
2026-02-20 02:43:14.723320 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57)
2026-02-20 02:43:14.723332 | orchestrator |
2026-02-20 02:43:14.723345 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723358 | orchestrator | Friday 20 February 2026 02:43:09 +0000 (0:00:00.421) 0:00:32.439 *******
2026-02-20 02:43:14.723371 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9)
2026-02-20 02:43:14.723383 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9)
2026-02-20 02:43:14.723397 | orchestrator |
2026-02-20 02:43:14.723409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723421 | orchestrator | Friday 20 February 2026 02:43:09 +0000 (0:00:00.418) 0:00:32.858 *******
2026-02-20 02:43:14.723434 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8)
2026-02-20 02:43:14.723446 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8)
2026-02-20 02:43:14.723460 | orchestrator |
2026-02-20 02:43:14.723472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:43:14.723485 | orchestrator | Friday 20 February 2026 02:43:09 +0000 (0:00:00.448) 0:00:33.306 *******
2026-02-20 02:43:14.723497 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-20 02:43:14.723509 | orchestrator |
2026-02-20 02:43:14.723534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723563 | orchestrator | Friday 20 February 2026 02:43:10 +0000 (0:00:00.368) 0:00:33.675 *******
2026-02-20 02:43:14.723575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-20 02:43:14.723586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-20 02:43:14.723605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-20 02:43:14.723616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-20 02:43:14.723627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-20 02:43:14.723638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-20 02:43:14.723648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-20 02:43:14.723659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-20 02:43:14.723670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-20 02:43:14.723681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-20 02:43:14.723691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-20 02:43:14.723702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-20 02:43:14.723713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-20 02:43:14.723723 | orchestrator |
2026-02-20 02:43:14.723734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723745 | orchestrator | Friday 20 February 2026 02:43:10 +0000 (0:00:00.618) 0:00:34.294 *******
2026-02-20 02:43:14.723756 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723767 | orchestrator |
2026-02-20 02:43:14.723778 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723789 | orchestrator | Friday 20 February 2026 02:43:11 +0000 (0:00:00.212) 0:00:34.506 *******
2026-02-20 02:43:14.723799 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723810 | orchestrator |
2026-02-20 02:43:14.723821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723832 | orchestrator | Friday 20 February 2026 02:43:11 +0000 (0:00:00.206) 0:00:34.713 *******
2026-02-20 02:43:14.723842 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723853 | orchestrator |
2026-02-20 02:43:14.723865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723875 | orchestrator | Friday 20 February 2026 02:43:11 +0000 (0:00:00.201) 0:00:34.914 *******
2026-02-20 02:43:14.723886 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723897 | orchestrator |
2026-02-20 02:43:14.723908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723919 | orchestrator | Friday 20 February 2026 02:43:11 +0000 (0:00:00.209) 0:00:35.124 *******
2026-02-20 02:43:14.723930 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.723941 | orchestrator |
2026-02-20 02:43:14.723952 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.723963 | orchestrator | Friday 20 February 2026 02:43:11 +0000 (0:00:00.203) 0:00:35.327 *******
2026-02-20 02:43:14.723974 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724015 | orchestrator |
2026-02-20 02:43:14.724027 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724038 | orchestrator | Friday 20 February 2026 02:43:12 +0000 (0:00:00.208) 0:00:35.536 *******
2026-02-20 02:43:14.724049 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724059 | orchestrator |
2026-02-20 02:43:14.724070 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724081 | orchestrator | Friday 20 February 2026 02:43:12 +0000 (0:00:00.219) 0:00:35.755 *******
2026-02-20 02:43:14.724092 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724103 | orchestrator |
2026-02-20 02:43:14.724113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724132 | orchestrator | Friday 20 February 2026 02:43:12 +0000 (0:00:00.198) 0:00:35.953 *******
2026-02-20 02:43:14.724143 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-20 02:43:14.724154 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-20 02:43:14.724165 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-20 02:43:14.724176 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-20 02:43:14.724187 | orchestrator |
2026-02-20 02:43:14.724197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724208 | orchestrator | Friday 20 February 2026 02:43:13 +0000 (0:00:00.874) 0:00:36.828 *******
2026-02-20 02:43:14.724219 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724230 | orchestrator |
2026-02-20 02:43:14.724240 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724251 | orchestrator | Friday 20 February 2026 02:43:13 +0000 (0:00:00.202) 0:00:37.030 *******
2026-02-20 02:43:14.724262 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724273 | orchestrator |
2026-02-20 02:43:14.724283 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724294 | orchestrator | Friday 20 February 2026 02:43:13 +0000 (0:00:00.207) 0:00:37.238 *******
2026-02-20 02:43:14.724305 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724316 | orchestrator |
2026-02-20 02:43:14.724327 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:43:14.724337 | orchestrator | Friday 20 February 2026 02:43:14 +0000 (0:00:00.640) 0:00:37.879 *******
2026-02-20 02:43:14.724354 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:14.724365 | orchestrator |
2026-02-20 02:43:14.724382 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-20 02:43:18.672358 | orchestrator | Friday 20 February 2026 02:43:14 +0000 (0:00:00.218) 0:00:38.097 *******
2026-02-20 02:43:18.672440 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-20 02:43:18.672449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-20 02:43:18.672455 | orchestrator |
2026-02-20 02:43:18.672461 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-20 02:43:18.672467 | orchestrator | Friday 20 February 2026 02:43:14 +0000 (0:00:00.174) 0:00:38.272 *******
2026-02-20 02:43:18.672473 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672478 | orchestrator |
2026-02-20 02:43:18.672484 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-20 02:43:18.672489 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.137) 0:00:38.410 *******
2026-02-20 02:43:18.672495 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672500 | orchestrator |
2026-02-20 02:43:18.672505 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-20 02:43:18.672510 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.130) 0:00:38.540 *******
2026-02-20 02:43:18.672515 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672520 | orchestrator |
2026-02-20 02:43:18.672526 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-20 02:43:18.672531 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.130) 0:00:38.670 *******
2026-02-20 02:43:18.672536 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:43:18.672542 | orchestrator |
2026-02-20 02:43:18.672548 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-20 02:43:18.672553 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.153) 0:00:38.824 *******
2026-02-20 02:43:18.672558 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}})
2026-02-20 02:43:18.672564 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fe77357-4c85-56ab-aabd-7cb5a18434f2'}})
2026-02-20 02:43:18.672569 | orchestrator |
2026-02-20 02:43:18.672574 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-20 02:43:18.672599 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.163) 0:00:38.988 *******
2026-02-20 02:43:18.672605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}})
2026-02-20 02:43:18.672612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fe77357-4c85-56ab-aabd-7cb5a18434f2'}})
2026-02-20 02:43:18.672617 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672622 | orchestrator |
2026-02-20 02:43:18.672628 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-20 02:43:18.672633 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.141) 0:00:39.130 *******
2026-02-20 02:43:18.672638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}})
2026-02-20 02:43:18.672643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fe77357-4c85-56ab-aabd-7cb5a18434f2'}})
2026-02-20 02:43:18.672648 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672654 | orchestrator |
2026-02-20 02:43:18.672659 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-20 02:43:18.672664 | orchestrator | Friday 20 February 2026 02:43:15 +0000 (0:00:00.148) 0:00:39.278 *******
2026-02-20 02:43:18.672669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}})
2026-02-20 02:43:18.672674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fe77357-4c85-56ab-aabd-7cb5a18434f2'}})
2026-02-20 02:43:18.672680 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672685 | orchestrator |
2026-02-20 02:43:18.672690 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-20 02:43:18.672695 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.141) 0:00:39.420 *******
2026-02-20 02:43:18.672700 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:43:18.672706 | orchestrator |
2026-02-20 02:43:18.672711 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-20 02:43:18.672716 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.135) 0:00:39.555 *******
2026-02-20 02:43:18.672721 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:43:18.672726 | orchestrator |
2026-02-20 02:43:18.672732 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-20 02:43:18.672737 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.324) 0:00:39.879 *******
2026-02-20 02:43:18.672742 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672747 | orchestrator |
2026-02-20 02:43:18.672753 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-20 02:43:18.672758 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.147) 0:00:40.026 *******
2026-02-20 02:43:18.672763 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672768 | orchestrator |
2026-02-20 02:43:18.672773 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-20 02:43:18.672778 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.136) 0:00:40.163 *******
2026-02-20 02:43:18.672784 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672789 | orchestrator |
2026-02-20 02:43:18.672794 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-20 02:43:18.672799 | orchestrator | Friday 20 February 2026 02:43:16 +0000 (0:00:00.139) 0:00:40.302 *******
2026-02-20 02:43:18.672816 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:43:18.672821 | orchestrator |     "ceph_osd_devices": {
2026-02-20 02:43:18.672827 | orchestrator |         "sdb": {
2026-02-20 02:43:18.672842 | orchestrator |             "osd_lvm_uuid": "9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae"
2026-02-20 02:43:18.672848 | orchestrator |         },
2026-02-20 02:43:18.672854 | orchestrator |         "sdc": {
2026-02-20 02:43:18.672859 | orchestrator |             "osd_lvm_uuid": "5fe77357-4c85-56ab-aabd-7cb5a18434f2"
2026-02-20 02:43:18.672868 | orchestrator |         }
2026-02-20 02:43:18.672873 | orchestrator |     }
2026-02-20 02:43:18.672879 | orchestrator | }
2026-02-20 02:43:18.672884 | orchestrator |
2026-02-20 02:43:18.672889 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-20 02:43:18.672895 | orchestrator | Friday 20 February 2026 02:43:17 +0000 (0:00:00.144) 0:00:40.446 *******
2026-02-20 02:43:18.672900 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672905 | orchestrator |
2026-02-20 02:43:18.672910 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-20 02:43:18.672915 | orchestrator | Friday 20 February 2026 02:43:17 +0000 (0:00:00.127) 0:00:40.574 *******
2026-02-20 02:43:18.672921 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672927 | orchestrator |
2026-02-20 02:43:18.672932 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-20 02:43:18.672938 | orchestrator | Friday 20 February 2026 02:43:17 +0000 (0:00:00.155) 0:00:40.730 *******
2026-02-20 02:43:18.672944 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:43:18.672950 | orchestrator |
2026-02-20 02:43:18.672956 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-20 02:43:18.672961 | orchestrator | Friday 20 February 2026 02:43:17 +0000 (0:00:00.132) 0:00:40.862 *******
2026-02-20 02:43:18.672967 | orchestrator | changed: [testbed-node-5] => {
2026-02-20 02:43:18.672973 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-20 02:43:18.672979 | orchestrator |
 "ceph_osd_devices": { 2026-02-20 02:43:18.673015 | orchestrator |  "sdb": { 2026-02-20 02:43:18.673022 | orchestrator |  "osd_lvm_uuid": "9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae" 2026-02-20 02:43:18.673028 | orchestrator |  }, 2026-02-20 02:43:18.673034 | orchestrator |  "sdc": { 2026-02-20 02:43:18.673040 | orchestrator |  "osd_lvm_uuid": "5fe77357-4c85-56ab-aabd-7cb5a18434f2" 2026-02-20 02:43:18.673046 | orchestrator |  } 2026-02-20 02:43:18.673052 | orchestrator |  }, 2026-02-20 02:43:18.673058 | orchestrator |  "lvm_volumes": [ 2026-02-20 02:43:18.673063 | orchestrator |  { 2026-02-20 02:43:18.673069 | orchestrator |  "data": "osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae", 2026-02-20 02:43:18.673075 | orchestrator |  "data_vg": "ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae" 2026-02-20 02:43:18.673081 | orchestrator |  }, 2026-02-20 02:43:18.673087 | orchestrator |  { 2026-02-20 02:43:18.673093 | orchestrator |  "data": "osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2", 2026-02-20 02:43:18.673098 | orchestrator |  "data_vg": "ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2" 2026-02-20 02:43:18.673104 | orchestrator |  } 2026-02-20 02:43:18.673110 | orchestrator |  ] 2026-02-20 02:43:18.673116 | orchestrator |  } 2026-02-20 02:43:18.673122 | orchestrator | } 2026-02-20 02:43:18.673127 | orchestrator | 2026-02-20 02:43:18.673133 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-20 02:43:18.673139 | orchestrator | Friday 20 February 2026 02:43:17 +0000 (0:00:00.202) 0:00:41.064 ******* 2026-02-20 02:43:18.673145 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-20 02:43:18.673150 | orchestrator | 2026-02-20 02:43:18.673156 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:43:18.673162 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 02:43:18.673169 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 02:43:18.673175 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 02:43:18.673181 | orchestrator | 2026-02-20 02:43:18.673191 | orchestrator | 2026-02-20 02:43:18.673197 | orchestrator | 2026-02-20 02:43:18.673203 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:43:18.673208 | orchestrator | Friday 20 February 2026 02:43:18 +0000 (0:00:00.976) 0:00:42.040 ******* 2026-02-20 02:43:18.673214 | orchestrator | =============================================================================== 2026-02-20 02:43:18.673220 | orchestrator | Write configuration file ------------------------------------------------ 3.95s 2026-02-20 02:43:18.673226 | orchestrator | Add known partitions to the list of available block devices ------------- 1.80s 2026-02-20 02:43:18.673232 | orchestrator | Add known links to the list of available block devices ------------------ 1.19s 2026-02-20 02:43:18.673238 | orchestrator | Print configuration data ------------------------------------------------ 1.02s 2026-02-20 02:43:18.673244 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-02-20 02:43:18.673249 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-02-20 02:43:18.673256 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-02-20 02:43:18.673261 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2026-02-20 02:43:18.673267 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-02-20 02:43:18.673273 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-02-20 
02:43:18.673279 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-20 02:43:18.673285 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2026-02-20 02:43:18.673294 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-02-20 02:43:18.673304 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-20 02:43:19.032185 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2026-02-20 02:43:19.032293 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-02-20 02:43:19.032308 | orchestrator | Set OSD devices config data --------------------------------------------- 0.62s 2026-02-20 02:43:19.032320 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.62s 2026-02-20 02:43:19.032331 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-02-20 02:43:19.032343 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-02-20 02:43:41.602609 | orchestrator | 2026-02-20 02:43:41 | INFO  | Task 39d5d505-1c82-418e-9b64-876367a6868f (sync inventory) is running in background. Output coming soon. 
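Editor's aside on the play above: the `Generate lvm_volumes structure (block only)` and `Compile lvm_volumes` tasks map each entry of `ceph_osd_devices` to a `ceph-<osd_lvm_uuid>` volume group and an `osd-block-<osd_lvm_uuid>` logical volume, which is exactly the `lvm_volumes` list shown by `Print configuration data`. A minimal Python sketch of that mapping, under the stated assumption that only the "block only" case applies (the helper name is ours, not part of OSISM):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Mimic the 'block only' case from the log: each OSD device's
    osd_lvm_uuid yields an osd-block-<uuid> LV inside a ceph-<uuid> VG."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

# The two devices reported for testbed-node-5 in the log above.
devices = {
    "sdb": {"osd_lvm_uuid": "9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae"},
    "sdc": {"osd_lvm_uuid": "5fe77357-4c85-56ab-aabd-7cb5a18434f2"},
}
for volume in build_lvm_volumes(devices):
    print(volume["data"], "->", volume["data_vg"])
```

Running this against the `sdb`/`sdc` entries reproduces the `lvm_volumes` pairs printed by the play for testbed-node-5.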
2026-02-20 02:44:06.330575 | orchestrator | 2026-02-20 02:43:43 | INFO  | Starting group_vars file reorganization
2026-02-20 02:44:06.330717 | orchestrator | 2026-02-20 02:43:43 | INFO  | Moved 0 file(s) to their respective directories
2026-02-20 02:44:06.330745 | orchestrator | 2026-02-20 02:43:43 | INFO  | Group_vars file reorganization completed
2026-02-20 02:44:06.330765 | orchestrator | 2026-02-20 02:43:45 | INFO  | Starting variable preparation from inventory
2026-02-20 02:44:06.330784 | orchestrator | 2026-02-20 02:43:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-20 02:44:06.330804 | orchestrator | 2026-02-20 02:43:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-20 02:44:06.330822 | orchestrator | 2026-02-20 02:43:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-20 02:44:06.330841 | orchestrator | 2026-02-20 02:43:48 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-20 02:44:06.330860 | orchestrator | 2026-02-20 02:43:48 | INFO  | Variable preparation completed
2026-02-20 02:44:06.330879 | orchestrator | 2026-02-20 02:43:49 | INFO  | Starting inventory overwrite handling
2026-02-20 02:44:06.330934 | orchestrator | 2026-02-20 02:43:49 | INFO  | Handling group overwrites in 99-overwrite
2026-02-20 02:44:06.330954 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removing group frr:children from 60-generic
2026-02-20 02:44:06.330974 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-20 02:44:06.330992 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-20 02:44:06.331011 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-20 02:44:06.331083 | orchestrator | 2026-02-20 02:43:49 | INFO  | Handling group overwrites in 20-roles
2026-02-20 02:44:06.331106 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-20 02:44:06.331125 | orchestrator | 2026-02-20 02:43:49 | INFO  | Removed 5 group(s) in total
2026-02-20 02:44:06.331144 | orchestrator | 2026-02-20 02:43:49 | INFO  | Inventory overwrite handling completed
2026-02-20 02:44:06.331165 | orchestrator | 2026-02-20 02:43:50 | INFO  | Starting merge of inventory files
2026-02-20 02:44:06.331183 | orchestrator | 2026-02-20 02:43:50 | INFO  | Inventory files merged successfully
2026-02-20 02:44:06.331202 | orchestrator | 2026-02-20 02:43:55 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-20 02:44:06.331223 | orchestrator | 2026-02-20 02:44:05 | INFO  | Successfully wrote ClusterShell configuration
2026-02-20 02:44:06.331243 | orchestrator | [master 474b6c2] 2026-02-20-02-44
2026-02-20 02:44:06.331265 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-20 02:44:08.299526 | orchestrator | 2026-02-20 02:44:08 | INFO  | Task 027fb93b-2c3c-43af-b585-4422196f9c5b (ceph-create-lvm-devices) was prepared for execution.
2026-02-20 02:44:08.299633 | orchestrator | 2026-02-20 02:44:08 | INFO  | It takes a moment until task 027fb93b-2c3c-43af-b585-4422196f9c5b (ceph-create-lvm-devices) has been started and output is visible here.
2026-02-20 02:44:18.455785 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-20 02:44:18.455899 | orchestrator | 2.16.14
2026-02-20 02:44:18.455917 | orchestrator |
2026-02-20 02:44:18.455929 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-20 02:44:18.455941 | orchestrator |
2026-02-20 02:44:18.455953 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-20 02:44:18.455964 | orchestrator | Friday 20 February 2026  02:44:11 +0000 (0:00:00.226)       0:00:00.226 *******
2026-02-20 02:44:18.455976 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-20 02:44:18.455987 | orchestrator |
2026-02-20 02:44:18.455998 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-20 02:44:18.456025 | orchestrator | Friday 20 February 2026  02:44:11 +0000 (0:00:00.222)       0:00:00.449 *******
2026-02-20 02:44:18.456094 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:44:18.456108 | orchestrator |
2026-02-20 02:44:18.456119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456130 | orchestrator | Friday 20 February 2026  02:44:12 +0000 (0:00:00.210)       0:00:00.659 *******
2026-02-20 02:44:18.456141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-20 02:44:18.456166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-20 02:44:18.456190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-20 02:44:18.456201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-20 02:44:18.456212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-20 02:44:18.456223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-20 02:44:18.456256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-20 02:44:18.456268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-20 02:44:18.456279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-20 02:44:18.456289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-20 02:44:18.456300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-20 02:44:18.456311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-20 02:44:18.456324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-20 02:44:18.456337 | orchestrator |
2026-02-20 02:44:18.456349 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456361 | orchestrator | Friday 20 February 2026  02:44:12 +0000 (0:00:00.416)       0:00:01.076 *******
2026-02-20 02:44:18.456374 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456387 | orchestrator |
2026-02-20 02:44:18.456400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456413 | orchestrator | Friday 20 February 2026  02:44:12 +0000 (0:00:00.190)       0:00:01.266 *******
2026-02-20 02:44:18.456426 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456438 | orchestrator |
2026-02-20 02:44:18.456451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456463 | orchestrator | Friday 20 February 2026  02:44:12 +0000 (0:00:00.189)       0:00:01.456 *******
2026-02-20 02:44:18.456476 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456488 | orchestrator |
2026-02-20 02:44:18.456501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456514 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.184)       0:00:01.641 *******
2026-02-20 02:44:18.456526 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456538 | orchestrator |
2026-02-20 02:44:18.456551 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456563 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.175)       0:00:01.816 *******
2026-02-20 02:44:18.456576 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456589 | orchestrator |
2026-02-20 02:44:18.456602 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456614 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.168)       0:00:01.985 *******
2026-02-20 02:44:18.456627 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456639 | orchestrator |
2026-02-20 02:44:18.456652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456664 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.168)       0:00:02.154 *******
2026-02-20 02:44:18.456677 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456688 | orchestrator |
2026-02-20 02:44:18.456698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456709 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.188)       0:00:02.343 *******
2026-02-20 02:44:18.456720 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.456730 | orchestrator |
2026-02-20 02:44:18.456741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456752 | orchestrator | Friday 20 February 2026  02:44:13 +0000 (0:00:00.188)       0:00:02.531 *******
2026-02-20 02:44:18.456763 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4)
2026-02-20 02:44:18.456775 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4)
2026-02-20 02:44:18.456786 | orchestrator |
2026-02-20 02:44:18.456797 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456834 | orchestrator | Friday 20 February 2026  02:44:14 +0000 (0:00:00.388)       0:00:02.919 *******
2026-02-20 02:44:18.456846 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737)
2026-02-20 02:44:18.456857 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737)
2026-02-20 02:44:18.456868 | orchestrator |
2026-02-20 02:44:18.456879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456890 | orchestrator | Friday 20 February 2026  02:44:14 +0000 (0:00:00.528)       0:00:03.448 *******
2026-02-20 02:44:18.456901 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2)
2026-02-20 02:44:18.456918 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2)
2026-02-20 02:44:18.456930 | orchestrator |
2026-02-20 02:44:18.456940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.456951 | orchestrator | Friday 20 February 2026  02:44:15 +0000 (0:00:00.545)       0:00:03.993 *******
2026-02-20 02:44:18.456962 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25)
2026-02-20 02:44:18.456973 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25)
2026-02-20 02:44:18.456984 | orchestrator |
2026-02-20 02:44:18.456995 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:18.457006 | orchestrator | Friday 20 February 2026  02:44:16 +0000 (0:00:00.835)       0:00:04.828 *******
2026-02-20 02:44:18.457017 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-20 02:44:18.457027 | orchestrator |
2026-02-20 02:44:18.457057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457068 | orchestrator | Friday 20 February 2026  02:44:16 +0000 (0:00:00.340)       0:00:05.169 *******
2026-02-20 02:44:18.457079 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-20 02:44:18.457090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-20 02:44:18.457101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-20 02:44:18.457112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-20 02:44:18.457122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-20 02:44:18.457133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-20 02:44:18.457144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-20 02:44:18.457155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-20 02:44:18.457165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-20 02:44:18.457176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-20 02:44:18.457187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-20 02:44:18.457197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-20 02:44:18.457208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-20 02:44:18.457219 | orchestrator |
2026-02-20 02:44:18.457230 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457241 | orchestrator | Friday 20 February 2026  02:44:16 +0000 (0:00:00.410)       0:00:05.580 *******
2026-02-20 02:44:18.457252 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457262 | orchestrator |
2026-02-20 02:44:18.457273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457292 | orchestrator | Friday 20 February 2026  02:44:17 +0000 (0:00:00.204)       0:00:05.785 *******
2026-02-20 02:44:18.457302 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457313 | orchestrator |
2026-02-20 02:44:18.457324 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457335 | orchestrator | Friday 20 February 2026  02:44:17 +0000 (0:00:00.202)       0:00:05.987 *******
2026-02-20 02:44:18.457346 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457356 | orchestrator |
2026-02-20 02:44:18.457367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457378 | orchestrator | Friday 20 February 2026  02:44:17 +0000 (0:00:00.201)       0:00:06.188 *******
2026-02-20 02:44:18.457389 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457399 | orchestrator |
2026-02-20 02:44:18.457410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457421 | orchestrator | Friday 20 February 2026  02:44:17 +0000 (0:00:00.201)       0:00:06.390 *******
2026-02-20 02:44:18.457432 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457442 | orchestrator |
2026-02-20 02:44:18.457453 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457464 | orchestrator | Friday 20 February 2026  02:44:18 +0000 (0:00:00.201)       0:00:06.592 *******
2026-02-20 02:44:18.457474 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457485 | orchestrator |
2026-02-20 02:44:18.457496 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:18.457507 | orchestrator | Friday 20 February 2026  02:44:18 +0000 (0:00:00.213)       0:00:06.806 *******
2026-02-20 02:44:18.457518 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:18.457529 | orchestrator |
2026-02-20 02:44:18.457546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475039 | orchestrator | Friday 20 February 2026  02:44:18 +0000 (0:00:00.225)       0:00:07.031 *******
2026-02-20 02:44:26.475212 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475228 | orchestrator |
2026-02-20 02:44:26.475241 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475253 | orchestrator | Friday 20 February 2026  02:44:19 +0000 (0:00:00.625)       0:00:07.657 *******
2026-02-20 02:44:26.475264 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-20 02:44:26.475275 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-20 02:44:26.475287 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-20 02:44:26.475297 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-20 02:44:26.475308 | orchestrator |
2026-02-20 02:44:26.475335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475347 | orchestrator | Friday 20 February 2026  02:44:19 +0000 (0:00:00.638)       0:00:08.295 *******
2026-02-20 02:44:26.475358 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475369 | orchestrator |
2026-02-20 02:44:26.475380 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475391 | orchestrator | Friday 20 February 2026  02:44:19 +0000 (0:00:00.203)       0:00:08.499 *******
2026-02-20 02:44:26.475402 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475413 | orchestrator |
2026-02-20 02:44:26.475424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475435 | orchestrator | Friday 20 February 2026  02:44:20 +0000 (0:00:00.199)       0:00:08.698 *******
2026-02-20 02:44:26.475446 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475457 | orchestrator |
2026-02-20 02:44:26.475468 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:26.475479 | orchestrator | Friday 20 February 2026  02:44:20 +0000 (0:00:00.196)       0:00:08.895 *******
2026-02-20 02:44:26.475489 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475500 | orchestrator |
2026-02-20 02:44:26.475511 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-20 02:44:26.475522 | orchestrator | Friday 20 February 2026  02:44:20 +0000 (0:00:00.201)       0:00:09.096 *******
2026-02-20 02:44:26.475555 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475566 | orchestrator |
2026-02-20 02:44:26.475577 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-20 02:44:26.475588 | orchestrator | Friday 20 February 2026  02:44:20 +0000 (0:00:00.137)       0:00:09.233 *******
2026-02-20 02:44:26.475599 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}})
2026-02-20 02:44:26.475612 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}})
2026-02-20 02:44:26.475623 | orchestrator |
2026-02-20 02:44:26.475634 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-20 02:44:26.475645 | orchestrator | Friday 20 February 2026  02:44:20 +0000 (0:00:00.190)       0:00:09.423 *******
2026-02-20 02:44:26.475657 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.475669 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.475680 | orchestrator |
2026-02-20 02:44:26.475691 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-20 02:44:26.475702 | orchestrator | Friday 20 February 2026  02:44:22 +0000 (0:00:02.004)       0:00:11.428 *******
2026-02-20 02:44:26.475712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.475724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.475735 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475746 | orchestrator |
2026-02-20 02:44:26.475757 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-20 02:44:26.475768 | orchestrator | Friday 20 February 2026  02:44:23 +0000 (0:00:00.157)       0:00:11.586 *******
2026-02-20 02:44:26.475779 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.475790 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.475800 | orchestrator |
2026-02-20 02:44:26.475811 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-20 02:44:26.475822 | orchestrator | Friday 20 February 2026  02:44:24 +0000 (0:00:01.467)       0:00:13.054 *******
2026-02-20 02:44:26.475833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.475843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.475854 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475865 | orchestrator |
2026-02-20 02:44:26.475876 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-20 02:44:26.475887 | orchestrator | Friday 20 February 2026  02:44:24 +0000 (0:00:00.326)       0:00:13.208 *******
2026-02-20 02:44:26.475939 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.475951 | orchestrator |
2026-02-20 02:44:26.475962 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-20 02:44:26.475973 | orchestrator | Friday 20 February 2026  02:44:24 +0000 (0:00:00.326)       0:00:13.535 *******
2026-02-20 02:44:26.475984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.476009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.476020 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.476031 | orchestrator |
2026-02-20 02:44:26.476063 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-20 02:44:26.476077 | orchestrator | Friday 20 February 2026  02:44:25 +0000 (0:00:00.159)       0:00:13.694 *******
2026-02-20 02:44:26.476088 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.476098 | orchestrator |
2026-02-20 02:44:26.476109 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-20 02:44:26.476119 | orchestrator | Friday 20 February 2026  02:44:25 +0000 (0:00:00.140)       0:00:13.835 *******
2026-02-20 02:44:26.476130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 02:44:26.476141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 02:44:26.476152 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.476162 | orchestrator |
2026-02-20 02:44:26.476173 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-20 02:44:26.476184 | orchestrator | Friday 20 February 2026  02:44:25 +0000 (0:00:00.150)       0:00:13.985 *******
2026-02-20 02:44:26.476194 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:44:26.476205 | orchestrator |
2026-02-20 02:44:26.476216 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-20 02:44:26.476229 | orchestrator | Friday
20 February 2026 02:44:25 +0000 (0:00:00.137) 0:00:14.123 ******* 2026-02-20 02:44:26.476248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:26.476266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:26.476297 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:26.476317 | orchestrator | 2026-02-20 02:44:26.476334 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-20 02:44:26.476351 | orchestrator | Friday 20 February 2026 02:44:25 +0000 (0:00:00.158) 0:00:14.282 ******* 2026-02-20 02:44:26.476368 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:26.476386 | orchestrator | 2026-02-20 02:44:26.476404 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-20 02:44:26.476423 | orchestrator | Friday 20 February 2026 02:44:25 +0000 (0:00:00.140) 0:00:14.423 ******* 2026-02-20 02:44:26.476441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:26.476460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:26.476479 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:26.476497 | orchestrator | 2026-02-20 02:44:26.476516 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-20 02:44:26.476535 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.169) 0:00:14.592 ******* 2026-02-20 02:44:26.476554 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:26.476573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:26.476593 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:26.476610 | orchestrator | 2026-02-20 02:44:26.476630 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-20 02:44:26.476661 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.153) 0:00:14.746 ******* 2026-02-20 02:44:26.476680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:26.476697 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:26.476716 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:26.476734 | orchestrator | 2026-02-20 02:44:26.476753 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-20 02:44:26.476772 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.158) 0:00:14.905 ******* 2026-02-20 02:44:26.476791 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:26.476810 | orchestrator | 2026-02-20 02:44:26.476828 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-20 02:44:26.476858 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.143) 0:00:15.048 ******* 2026-02-20 02:44:33.187350 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.187456 | orchestrator | 2026-02-20 02:44:33.187472 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-20 02:44:33.187486 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.151) 0:00:15.199 ******* 2026-02-20 02:44:33.187497 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.187508 | orchestrator | 2026-02-20 02:44:33.187520 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-20 02:44:33.187531 | orchestrator | Friday 20 February 2026 02:44:26 +0000 (0:00:00.331) 0:00:15.531 ******* 2026-02-20 02:44:33.187541 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:44:33.187569 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-20 02:44:33.187580 | orchestrator | } 2026-02-20 02:44:33.187592 | orchestrator | 2026-02-20 02:44:33.187602 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-20 02:44:33.187613 | orchestrator | Friday 20 February 2026 02:44:27 +0000 (0:00:00.148) 0:00:15.679 ******* 2026-02-20 02:44:33.187624 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:44:33.187635 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-20 02:44:33.187646 | orchestrator | } 2026-02-20 02:44:33.187685 | orchestrator | 2026-02-20 02:44:33.187699 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-20 02:44:33.187710 | orchestrator | Friday 20 February 2026 02:44:27 +0000 (0:00:00.145) 0:00:15.825 ******* 2026-02-20 02:44:33.187720 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:44:33.187732 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-20 02:44:33.187743 | orchestrator | } 2026-02-20 02:44:33.187754 | orchestrator | 2026-02-20 02:44:33.187765 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-20 02:44:33.187776 | orchestrator | Friday 20 February 2026 02:44:27 +0000 (0:00:00.142) 0:00:15.967 ******* 2026-02-20 02:44:33.187787 | orchestrator | ok: 
[testbed-node-3] 2026-02-20 02:44:33.187798 | orchestrator | 2026-02-20 02:44:33.187809 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-20 02:44:33.187820 | orchestrator | Friday 20 February 2026 02:44:28 +0000 (0:00:00.680) 0:00:16.648 ******* 2026-02-20 02:44:33.187831 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:33.187842 | orchestrator | 2026-02-20 02:44:33.187853 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-20 02:44:33.187864 | orchestrator | Friday 20 February 2026 02:44:28 +0000 (0:00:00.535) 0:00:17.183 ******* 2026-02-20 02:44:33.187876 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:33.187889 | orchestrator | 2026-02-20 02:44:33.187902 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-20 02:44:33.187914 | orchestrator | Friday 20 February 2026 02:44:29 +0000 (0:00:00.543) 0:00:17.727 ******* 2026-02-20 02:44:33.187950 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:33.187962 | orchestrator | 2026-02-20 02:44:33.187975 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-20 02:44:33.187987 | orchestrator | Friday 20 February 2026 02:44:29 +0000 (0:00:00.149) 0:00:17.877 ******* 2026-02-20 02:44:33.187999 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188011 | orchestrator | 2026-02-20 02:44:33.188023 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-20 02:44:33.188036 | orchestrator | Friday 20 February 2026 02:44:29 +0000 (0:00:00.109) 0:00:17.987 ******* 2026-02-20 02:44:33.188075 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188087 | orchestrator | 2026-02-20 02:44:33.188097 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-20 02:44:33.188108 | orchestrator | 
Friday 20 February 2026 02:44:29 +0000 (0:00:00.154) 0:00:18.142 ******* 2026-02-20 02:44:33.188119 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:44:33.188130 | orchestrator |  "vgs_report": { 2026-02-20 02:44:33.188141 | orchestrator |  "vg": [] 2026-02-20 02:44:33.188152 | orchestrator |  } 2026-02-20 02:44:33.188163 | orchestrator | } 2026-02-20 02:44:33.188174 | orchestrator | 2026-02-20 02:44:33.188185 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-20 02:44:33.188196 | orchestrator | Friday 20 February 2026 02:44:29 +0000 (0:00:00.153) 0:00:18.295 ******* 2026-02-20 02:44:33.188207 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188217 | orchestrator | 2026-02-20 02:44:33.188228 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-20 02:44:33.188239 | orchestrator | Friday 20 February 2026 02:44:29 +0000 (0:00:00.135) 0:00:18.431 ******* 2026-02-20 02:44:33.188249 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188260 | orchestrator | 2026-02-20 02:44:33.188271 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-20 02:44:33.188282 | orchestrator | Friday 20 February 2026 02:44:30 +0000 (0:00:00.352) 0:00:18.783 ******* 2026-02-20 02:44:33.188292 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188314 | orchestrator | 2026-02-20 02:44:33.188325 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-20 02:44:33.188336 | orchestrator | Friday 20 February 2026 02:44:30 +0000 (0:00:00.134) 0:00:18.918 ******* 2026-02-20 02:44:33.188346 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188357 | orchestrator | 2026-02-20 02:44:33.188368 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-20 02:44:33.188379 | orchestrator | Friday 
20 February 2026 02:44:30 +0000 (0:00:00.137) 0:00:19.055 ******* 2026-02-20 02:44:33.188389 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188400 | orchestrator | 2026-02-20 02:44:33.188410 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-20 02:44:33.188421 | orchestrator | Friday 20 February 2026 02:44:30 +0000 (0:00:00.138) 0:00:19.193 ******* 2026-02-20 02:44:33.188432 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188442 | orchestrator | 2026-02-20 02:44:33.188453 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-20 02:44:33.188464 | orchestrator | Friday 20 February 2026 02:44:30 +0000 (0:00:00.162) 0:00:19.356 ******* 2026-02-20 02:44:33.188474 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188485 | orchestrator | 2026-02-20 02:44:33.188495 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-20 02:44:33.188506 | orchestrator | Friday 20 February 2026 02:44:30 +0000 (0:00:00.147) 0:00:19.504 ******* 2026-02-20 02:44:33.188535 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188546 | orchestrator | 2026-02-20 02:44:33.188557 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-20 02:44:33.188568 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.137) 0:00:19.641 ******* 2026-02-20 02:44:33.188579 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188590 | orchestrator | 2026-02-20 02:44:33.188609 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-20 02:44:33.188620 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.142) 0:00:19.784 ******* 2026-02-20 02:44:33.188631 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188642 | orchestrator | 2026-02-20 02:44:33.188658 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-20 02:44:33.188669 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.179) 0:00:19.963 ******* 2026-02-20 02:44:33.188680 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188691 | orchestrator | 2026-02-20 02:44:33.188702 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-20 02:44:33.188712 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.146) 0:00:20.110 ******* 2026-02-20 02:44:33.188723 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188734 | orchestrator | 2026-02-20 02:44:33.188745 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-20 02:44:33.188756 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.147) 0:00:20.258 ******* 2026-02-20 02:44:33.188766 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188777 | orchestrator | 2026-02-20 02:44:33.188788 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-20 02:44:33.188799 | orchestrator | Friday 20 February 2026 02:44:31 +0000 (0:00:00.151) 0:00:20.409 ******* 2026-02-20 02:44:33.188810 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188821 | orchestrator | 2026-02-20 02:44:33.188831 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-20 02:44:33.188842 | orchestrator | Friday 20 February 2026 02:44:32 +0000 (0:00:00.339) 0:00:20.749 ******* 2026-02-20 02:44:33.188854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:33.188867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 
'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:33.188878 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188889 | orchestrator | 2026-02-20 02:44:33.188900 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-20 02:44:33.188911 | orchestrator | Friday 20 February 2026 02:44:32 +0000 (0:00:00.174) 0:00:20.924 ******* 2026-02-20 02:44:33.188922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:33.188933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:33.188944 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.188955 | orchestrator | 2026-02-20 02:44:33.188966 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-20 02:44:33.188977 | orchestrator | Friday 20 February 2026 02:44:32 +0000 (0:00:00.173) 0:00:21.097 ******* 2026-02-20 02:44:33.188987 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:33.188998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:33.189009 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.189020 | orchestrator | 2026-02-20 02:44:33.189031 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-20 02:44:33.189042 | orchestrator | Friday 20 February 2026 02:44:32 +0000 (0:00:00.171) 0:00:21.268 ******* 2026-02-20 02:44:33.189084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:33.189102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:33.189114 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.189124 | orchestrator | 2026-02-20 02:44:33.189135 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-20 02:44:33.189146 | orchestrator | Friday 20 February 2026 02:44:32 +0000 (0:00:00.184) 0:00:21.453 ******* 2026-02-20 02:44:33.189157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:33.189168 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:33.189178 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:33.189189 | orchestrator | 2026-02-20 02:44:33.189200 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-20 02:44:33.189211 | orchestrator | Friday 20 February 2026 02:44:33 +0000 (0:00:00.149) 0:00:21.603 ******* 2026-02-20 02:44:33.189229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.640816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641021 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641104 | orchestrator | 2026-02-20 02:44:38.641130 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-20 02:44:38.641170 | orchestrator | Friday 20 February 2026 02:44:33 +0000 (0:00:00.156) 0:00:21.760 ******* 2026-02-20 02:44:38.641182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.641194 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641205 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641223 | orchestrator | 2026-02-20 02:44:38.641241 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-20 02:44:38.641257 | orchestrator | Friday 20 February 2026 02:44:33 +0000 (0:00:00.150) 0:00:21.910 ******* 2026-02-20 02:44:38.641273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.641289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641305 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641322 | orchestrator | 2026-02-20 02:44:38.641338 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-20 02:44:38.641354 | orchestrator | Friday 20 February 2026 02:44:33 +0000 (0:00:00.156) 0:00:22.067 ******* 2026-02-20 02:44:38.641373 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:38.641394 | orchestrator | 2026-02-20 02:44:38.641411 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-20 02:44:38.641427 | orchestrator | Friday 20 February 2026 02:44:34 +0000 
(0:00:00.548) 0:00:22.616 ******* 2026-02-20 02:44:38.641444 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:38.641461 | orchestrator | 2026-02-20 02:44:38.641479 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-20 02:44:38.641496 | orchestrator | Friday 20 February 2026 02:44:34 +0000 (0:00:00.565) 0:00:23.182 ******* 2026-02-20 02:44:38.641514 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:44:38.641532 | orchestrator | 2026-02-20 02:44:38.641552 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-20 02:44:38.641604 | orchestrator | Friday 20 February 2026 02:44:34 +0000 (0:00:00.160) 0:00:23.343 ******* 2026-02-20 02:44:38.641619 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'vg_name': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}) 2026-02-20 02:44:38.641632 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'vg_name': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}) 2026-02-20 02:44:38.641644 | orchestrator | 2026-02-20 02:44:38.641656 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-20 02:44:38.641669 | orchestrator | Friday 20 February 2026 02:44:34 +0000 (0:00:00.168) 0:00:23.512 ******* 2026-02-20 02:44:38.641681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.641692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641702 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641713 | orchestrator | 2026-02-20 02:44:38.641724 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-20 02:44:38.641734 | orchestrator | Friday 20 February 2026 02:44:35 +0000 (0:00:00.366) 0:00:23.878 ******* 2026-02-20 02:44:38.641745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.641756 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641766 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641777 | orchestrator | 2026-02-20 02:44:38.641787 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-20 02:44:38.641798 | orchestrator | Friday 20 February 2026 02:44:35 +0000 (0:00:00.172) 0:00:24.050 ******* 2026-02-20 02:44:38.641808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 02:44:38.641819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 02:44:38.641830 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:44:38.641841 | orchestrator | 2026-02-20 02:44:38.641851 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-20 02:44:38.641862 | orchestrator | Friday 20 February 2026 02:44:35 +0000 (0:00:00.151) 0:00:24.201 ******* 2026-02-20 02:44:38.641894 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 02:44:38.641906 | orchestrator |  "lvm_report": { 2026-02-20 02:44:38.641917 | orchestrator |  "lv": [ 2026-02-20 02:44:38.641928 | orchestrator |  { 2026-02-20 02:44:38.641939 | orchestrator |  "lv_name": 
"osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f", 2026-02-20 02:44:38.641950 | orchestrator |  "vg_name": "ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f" 2026-02-20 02:44:38.641961 | orchestrator |  }, 2026-02-20 02:44:38.641971 | orchestrator |  { 2026-02-20 02:44:38.641990 | orchestrator |  "lv_name": "osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2", 2026-02-20 02:44:38.642001 | orchestrator |  "vg_name": "ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2" 2026-02-20 02:44:38.642012 | orchestrator |  } 2026-02-20 02:44:38.642125 | orchestrator |  ], 2026-02-20 02:44:38.642136 | orchestrator |  "pv": [ 2026-02-20 02:44:38.642147 | orchestrator |  { 2026-02-20 02:44:38.642158 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-20 02:44:38.642169 | orchestrator |  "vg_name": "ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f" 2026-02-20 02:44:38.642190 | orchestrator |  }, 2026-02-20 02:44:38.642200 | orchestrator |  { 2026-02-20 02:44:38.642211 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-20 02:44:38.642230 | orchestrator |  "vg_name": "ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2" 2026-02-20 02:44:38.642250 | orchestrator |  } 2026-02-20 02:44:38.642277 | orchestrator |  ] 2026-02-20 02:44:38.642296 | orchestrator |  } 2026-02-20 02:44:38.642313 | orchestrator | } 2026-02-20 02:44:38.642331 | orchestrator | 2026-02-20 02:44:38.642350 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-20 02:44:38.642369 | orchestrator | 2026-02-20 02:44:38.642386 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-20 02:44:38.642404 | orchestrator | Friday 20 February 2026 02:44:35 +0000 (0:00:00.312) 0:00:24.513 ******* 2026-02-20 02:44:38.642422 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-20 02:44:38.642441 | orchestrator | 2026-02-20 02:44:38.642460 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-20 
02:44:38.642479 | orchestrator | Friday 20 February 2026 02:44:36 +0000 (0:00:00.262) 0:00:24.776 ******* 2026-02-20 02:44:38.642496 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:44:38.642511 | orchestrator | 2026-02-20 02:44:38.642522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:44:38.642532 | orchestrator | Friday 20 February 2026 02:44:36 +0000 (0:00:00.237) 0:00:25.014 ******* 2026-02-20 02:44:38.642543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-20 02:44:38.642553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-20 02:44:38.642564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-20 02:44:38.642574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-20 02:44:38.642585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-20 02:44:38.642595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-20 02:44:38.642606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-20 02:44:38.642616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-20 02:44:38.642627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-20 02:44:38.642638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-20 02:44:38.642648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-20 02:44:38.642659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-20 02:44:38.642669 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-20 02:44:38.642680 | orchestrator |
2026-02-20 02:44:38.642690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642701 | orchestrator | Friday 20 February 2026 02:44:36 +0000 (0:00:00.420) 0:00:25.434 *******
2026-02-20 02:44:38.642711 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.642722 | orchestrator |
2026-02-20 02:44:38.642732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642743 | orchestrator | Friday 20 February 2026 02:44:37 +0000 (0:00:00.220) 0:00:25.655 *******
2026-02-20 02:44:38.642754 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.642765 | orchestrator |
2026-02-20 02:44:38.642775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642786 | orchestrator | Friday 20 February 2026 02:44:37 +0000 (0:00:00.677) 0:00:26.333 *******
2026-02-20 02:44:38.642797 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.642819 | orchestrator |
2026-02-20 02:44:38.642829 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642840 | orchestrator | Friday 20 February 2026 02:44:37 +0000 (0:00:00.225) 0:00:26.558 *******
2026-02-20 02:44:38.642851 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.642864 | orchestrator |
2026-02-20 02:44:38.642882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642909 | orchestrator | Friday 20 February 2026 02:44:38 +0000 (0:00:00.239) 0:00:26.798 *******
2026-02-20 02:44:38.642929 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.642946 | orchestrator |
2026-02-20 02:44:38.642963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:38.642980 | orchestrator | Friday 20 February 2026 02:44:38 +0000 (0:00:00.215) 0:00:27.014 *******
2026-02-20 02:44:38.642998 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:38.643013 | orchestrator |
2026-02-20 02:44:38.643045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.919650 | orchestrator | Friday 20 February 2026 02:44:38 +0000 (0:00:00.203) 0:00:27.217 *******
2026-02-20 02:44:49.919790 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.919819 | orchestrator |
2026-02-20 02:44:49.919840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.919859 | orchestrator | Friday 20 February 2026 02:44:38 +0000 (0:00:00.241) 0:00:27.458 *******
2026-02-20 02:44:49.919897 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.919917 | orchestrator |
2026-02-20 02:44:49.919936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.919947 | orchestrator | Friday 20 February 2026 02:44:39 +0000 (0:00:00.199) 0:00:27.658 *******
2026-02-20 02:44:49.919958 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178)
2026-02-20 02:44:49.919978 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178)
2026-02-20 02:44:49.919996 | orchestrator |
2026-02-20 02:44:49.920015 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.920034 | orchestrator | Friday 20 February 2026 02:44:39 +0000 (0:00:00.432) 0:00:28.091 *******
2026-02-20 02:44:49.920051 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6)
2026-02-20 02:44:49.920145 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6)
2026-02-20 02:44:49.920166 | orchestrator |
2026-02-20 02:44:49.920185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.920202 | orchestrator | Friday 20 February 2026 02:44:39 +0000 (0:00:00.456) 0:00:28.547 *******
2026-02-20 02:44:49.920219 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289)
2026-02-20 02:44:49.920236 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289)
2026-02-20 02:44:49.920254 | orchestrator |
2026-02-20 02:44:49.920272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.920290 | orchestrator | Friday 20 February 2026 02:44:40 +0000 (0:00:00.721) 0:00:29.268 *******
2026-02-20 02:44:49.920308 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca)
2026-02-20 02:44:49.920327 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca)
2026-02-20 02:44:49.920345 | orchestrator |
2026-02-20 02:44:49.920363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-20 02:44:49.920381 | orchestrator | Friday 20 February 2026 02:44:41 +0000 (0:00:00.891) 0:00:30.159 *******
2026-02-20 02:44:49.920398 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-20 02:44:49.920416 | orchestrator |
2026-02-20 02:44:49.920433 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.920484 | orchestrator | Friday 20 February 2026 02:44:41 +0000 (0:00:00.371) 0:00:30.530 *******
2026-02-20 02:44:49.920503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-20 02:44:49.920521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-20 02:44:49.920537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-20 02:44:49.920555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-20 02:44:49.920572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-20 02:44:49.920606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-20 02:44:49.920623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-20 02:44:49.920640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-20 02:44:49.920656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-20 02:44:49.920672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-20 02:44:49.920688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-20 02:44:49.920704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-20 02:44:49.920806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-20 02:44:49.920825 | orchestrator |
2026-02-20 02:44:49.920845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.920864 | orchestrator | Friday 20 February 2026 02:44:42 +0000 (0:00:00.423) 0:00:30.953 *******
2026-02-20 02:44:49.920883 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.920902 | orchestrator |
2026-02-20 02:44:49.920921 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.920940 | orchestrator | Friday 20 February 2026 02:44:42 +0000 (0:00:00.200) 0:00:31.154 *******
2026-02-20 02:44:49.920958 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.920979 | orchestrator |
2026-02-20 02:44:49.920998 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921019 | orchestrator | Friday 20 February 2026 02:44:42 +0000 (0:00:00.198) 0:00:31.352 *******
2026-02-20 02:44:49.921039 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921059 | orchestrator |
2026-02-20 02:44:49.921139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921160 | orchestrator | Friday 20 February 2026 02:44:42 +0000 (0:00:00.208) 0:00:31.561 *******
2026-02-20 02:44:49.921180 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921197 | orchestrator |
2026-02-20 02:44:49.921216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921249 | orchestrator | Friday 20 February 2026 02:44:43 +0000 (0:00:00.218) 0:00:31.780 *******
2026-02-20 02:44:49.921270 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921289 | orchestrator |
2026-02-20 02:44:49.921309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921328 | orchestrator | Friday 20 February 2026 02:44:43 +0000 (0:00:00.209) 0:00:31.989 *******
2026-02-20 02:44:49.921346 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921365 | orchestrator |
2026-02-20 02:44:49.921383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921402 | orchestrator | Friday 20 February 2026 02:44:43 +0000 (0:00:00.201) 0:00:32.191 *******
2026-02-20 02:44:49.921421 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921438 | orchestrator |
2026-02-20 02:44:49.921457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921495 | orchestrator | Friday 20 February 2026 02:44:43 +0000 (0:00:00.239) 0:00:32.431 *******
2026-02-20 02:44:49.921514 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921533 | orchestrator |
2026-02-20 02:44:49.921552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921570 | orchestrator | Friday 20 February 2026 02:44:44 +0000 (0:00:00.659) 0:00:33.090 *******
2026-02-20 02:44:49.921587 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-20 02:44:49.921606 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-20 02:44:49.921624 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-20 02:44:49.921642 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-20 02:44:49.921660 | orchestrator |
2026-02-20 02:44:49.921679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921697 | orchestrator | Friday 20 February 2026 02:44:45 +0000 (0:00:00.733) 0:00:33.823 *******
2026-02-20 02:44:49.921716 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921735 | orchestrator |
2026-02-20 02:44:49.921753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921773 | orchestrator | Friday 20 February 2026 02:44:45 +0000 (0:00:00.217) 0:00:34.041 *******
2026-02-20 02:44:49.921791 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921809 | orchestrator |
2026-02-20 02:44:49.921827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921846 | orchestrator | Friday 20 February 2026 02:44:45 +0000 (0:00:00.213) 0:00:34.254 *******
2026-02-20 02:44:49.921864 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921883 | orchestrator |
2026-02-20 02:44:49.921896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-20 02:44:49.921907 | orchestrator | Friday 20 February 2026 02:44:45 +0000 (0:00:00.236) 0:00:34.491 *******
2026-02-20 02:44:49.921918 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921928 | orchestrator |
2026-02-20 02:44:49.921939 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-20 02:44:49.921950 | orchestrator | Friday 20 February 2026 02:44:46 +0000 (0:00:00.235) 0:00:34.726 *******
2026-02-20 02:44:49.921960 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.921971 | orchestrator |
2026-02-20 02:44:49.921981 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-20 02:44:49.921992 | orchestrator | Friday 20 February 2026 02:44:46 +0000 (0:00:00.140) 0:00:34.867 *******
2026-02-20 02:44:49.922003 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ad1d47ce-3300-5f5f-a456-60212d7294ef'}})
2026-02-20 02:44:49.922106 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}})
2026-02-20 02:44:49.922124 | orchestrator |
2026-02-20 02:44:49.922135 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-20 02:44:49.922146 | orchestrator | Friday 20 February 2026 02:44:46 +0000 (0:00:00.188) 0:00:35.055 *******
2026-02-20 02:44:49.922158 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:49.922170 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:49.922179 | orchestrator |
2026-02-20 02:44:49.922189 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-20 02:44:49.922199 | orchestrator | Friday 20 February 2026 02:44:48 +0000 (0:00:01.881) 0:00:36.937 *******
2026-02-20 02:44:49.922209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:49.922220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:49.922240 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:49.922250 | orchestrator |
2026-02-20 02:44:49.922259 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-20 02:44:49.922269 | orchestrator | Friday 20 February 2026 02:44:48 +0000 (0:00:00.214) 0:00:37.152 *******
2026-02-20 02:44:49.922278 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:49.922300 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.850874 | orchestrator |
2026-02-20 02:44:55.850964 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-20 02:44:55.850977 | orchestrator | Friday 20 February 2026 02:44:49 +0000 (0:00:01.339) 0:00:38.492 *******
2026-02-20 02:44:55.850999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851017 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851025 | orchestrator |
2026-02-20 02:44:55.851036 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-20 02:44:55.851049 | orchestrator | Friday 20 February 2026 02:44:50 +0000 (0:00:00.385) 0:00:38.877 *******
2026-02-20 02:44:55.851062 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851133 | orchestrator |
2026-02-20 02:44:55.851143 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-20 02:44:55.851150 | orchestrator | Friday 20 February 2026 02:44:50 +0000 (0:00:00.142) 0:00:39.020 *******
2026-02-20 02:44:55.851158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851173 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851180 | orchestrator |
2026-02-20 02:44:55.851187 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-20 02:44:55.851195 | orchestrator | Friday 20 February 2026 02:44:50 +0000 (0:00:00.144) 0:00:39.221 *******
2026-02-20 02:44:55.851202 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851209 | orchestrator |
2026-02-20 02:44:55.851217 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-20 02:44:55.851224 | orchestrator | Friday 20 February 2026 02:44:50 +0000 (0:00:00.144) 0:00:39.365 *******
2026-02-20 02:44:55.851232 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851247 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851254 | orchestrator |
2026-02-20 02:44:55.851261 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-20 02:44:55.851268 | orchestrator | Friday 20 February 2026 02:44:50 +0000 (0:00:00.148) 0:00:39.568 *******
2026-02-20 02:44:55.851276 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851283 | orchestrator |
2026-02-20 02:44:55.851290 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-20 02:44:55.851297 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.148) 0:00:39.716 *******
2026-02-20 02:44:55.851325 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851340 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851347 | orchestrator |
2026-02-20 02:44:55.851355 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-20 02:44:55.851374 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.161) 0:00:39.877 *******
2026-02-20 02:44:55.851389 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:44:55.851398 | orchestrator |
2026-02-20 02:44:55.851405 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-20 02:44:55.851412 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.191) 0:00:40.069 *******
2026-02-20 02:44:55.851419 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851436 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851444 | orchestrator |
2026-02-20 02:44:55.851453 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-20 02:44:55.851461 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.169) 0:00:40.238 *******
2026-02-20 02:44:55.851470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851486 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851494 | orchestrator |
2026-02-20 02:44:55.851502 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-20 02:44:55.851524 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.151) 0:00:40.389 *******
2026-02-20 02:44:55.851537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:44:55.851546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:44:55.851555 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851563 | orchestrator |
2026-02-20 02:44:55.851571 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-20 02:44:55.851580 | orchestrator | Friday 20 February 2026 02:44:51 +0000 (0:00:00.148) 0:00:40.538 *******
2026-02-20 02:44:55.851588 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851596 | orchestrator |
2026-02-20 02:44:55.851604 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-20 02:44:55.851613 | orchestrator | Friday 20 February 2026 02:44:52 +0000 (0:00:00.326) 0:00:40.865 *******
2026-02-20 02:44:55.851621 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851629 | orchestrator |
2026-02-20 02:44:55.851637 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-20 02:44:55.851646 | orchestrator | Friday 20 February 2026 02:44:52 +0000 (0:00:00.146) 0:00:41.011 *******
2026-02-20 02:44:55.851654 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851662 | orchestrator |
2026-02-20 02:44:55.851670 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-20 02:44:55.851678 | orchestrator | Friday 20 February 2026 02:44:52 +0000 (0:00:00.139) 0:00:41.151 *******
2026-02-20 02:44:55.851687 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 02:44:55.851701 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-20 02:44:55.851710 | orchestrator | }
2026-02-20 02:44:55.851719 | orchestrator |
2026-02-20 02:44:55.851727 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-20 02:44:55.851735 | orchestrator | Friday 20 February 2026 02:44:52 +0000 (0:00:00.144) 0:00:41.295 *******
2026-02-20 02:44:55.851743 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 02:44:55.851751 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-20 02:44:55.851759 | orchestrator | }
2026-02-20 02:44:55.851768 | orchestrator |
2026-02-20 02:44:55.851777 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-20 02:44:55.851785 | orchestrator | Friday 20 February 2026 02:44:52 +0000 (0:00:00.143) 0:00:41.439 *******
2026-02-20 02:44:55.851792 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 02:44:55.851799 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-20 02:44:55.851807 | orchestrator | }
2026-02-20 02:44:55.851814 | orchestrator |
2026-02-20 02:44:55.851821 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-20 02:44:55.851828 | orchestrator | Friday 20 February 2026 02:44:53 +0000 (0:00:00.158) 0:00:41.597 *******
2026-02-20 02:44:55.851836 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:44:55.851843 | orchestrator |
2026-02-20 02:44:55.851850 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-20 02:44:55.851857 | orchestrator | Friday 20 February 2026 02:44:53 +0000 (0:00:00.510) 0:00:42.108 *******
2026-02-20 02:44:55.851864 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:44:55.851872 | orchestrator |
2026-02-20 02:44:55.851879 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-20 02:44:55.851886 | orchestrator | Friday 20 February 2026 02:44:54 +0000 (0:00:00.542) 0:00:42.651 *******
2026-02-20 02:44:55.851893 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:44:55.851900 | orchestrator |
2026-02-20 02:44:55.851908 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-20 02:44:55.851915 | orchestrator | Friday 20 February 2026 02:44:54 +0000 (0:00:00.514) 0:00:43.165 *******
2026-02-20 02:44:55.851922 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:44:55.851929 | orchestrator |
2026-02-20 02:44:55.851936 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-20 02:44:55.851944 | orchestrator | Friday 20 February 2026 02:44:54 +0000 (0:00:00.150) 0:00:43.316 *******
2026-02-20 02:44:55.851951 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851958 | orchestrator |
2026-02-20 02:44:55.851965 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-20 02:44:55.851972 | orchestrator | Friday 20 February 2026 02:44:54 +0000 (0:00:00.110) 0:00:43.427 *******
2026-02-20 02:44:55.851980 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.851987 | orchestrator |
2026-02-20 02:44:55.851994 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-20 02:44:55.852001 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.285) 0:00:43.713 *******
2026-02-20 02:44:55.852008 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 02:44:55.852016 | orchestrator |  "vgs_report": {
2026-02-20 02:44:55.852023 | orchestrator |  "vg": []
2026-02-20 02:44:55.852033 | orchestrator |  }
2026-02-20 02:44:55.852045 | orchestrator | }
2026-02-20 02:44:55.852058 | orchestrator |
2026-02-20 02:44:55.852108 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-20 02:44:55.852123 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.148) 0:00:43.861 *******
2026-02-20 02:44:55.852135 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.852146 | orchestrator |
2026-02-20 02:44:55.852153 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-20 02:44:55.852161 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.139) 0:00:44.000 *******
2026-02-20 02:44:55.852168 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.852175 | orchestrator |
2026-02-20 02:44:55.852188 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-20 02:44:55.852195 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.138) 0:00:44.139 *******
2026-02-20 02:44:55.852202 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.852210 | orchestrator |
2026-02-20 02:44:55.852217 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-20 02:44:55.852224 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.140) 0:00:44.280 *******
2026-02-20 02:44:55.852231 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:44:55.852238 | orchestrator |
2026-02-20 02:44:55.852251 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-20 02:45:00.612044 | orchestrator | Friday 20 February 2026 02:44:55 +0000 (0:00:00.147) 0:00:44.428 *******
2026-02-20 02:45:00.612202 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612220 | orchestrator |
2026-02-20 02:45:00.612232 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-20 02:45:00.612244 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.161) 0:00:44.589 *******
2026-02-20 02:45:00.612254 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612266 | orchestrator |
2026-02-20 02:45:00.612277 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-20 02:45:00.612288 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.138) 0:00:44.728 *******
2026-02-20 02:45:00.612299 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612310 | orchestrator |
2026-02-20 02:45:00.612321 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-20 02:45:00.612332 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.127) 0:00:44.855 *******
2026-02-20 02:45:00.612342 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612353 | orchestrator |
2026-02-20 02:45:00.612364 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-20 02:45:00.612375 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.142) 0:00:44.997 *******
2026-02-20 02:45:00.612386 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612397 | orchestrator |
2026-02-20 02:45:00.612408 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-20 02:45:00.612419 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.124) 0:00:45.122 *******
2026-02-20 02:45:00.612431 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612442 | orchestrator |
2026-02-20 02:45:00.612453 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-20 02:45:00.612464 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.315) 0:00:45.438 *******
2026-02-20 02:45:00.612475 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612486 | orchestrator |
2026-02-20 02:45:00.612497 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-20 02:45:00.612507 | orchestrator | Friday 20 February 2026 02:44:56 +0000 (0:00:00.142) 0:00:45.580 *******
2026-02-20 02:45:00.612518 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612529 | orchestrator |
2026-02-20 02:45:00.612540 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-20 02:45:00.612551 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.143) 0:00:45.723 *******
2026-02-20 02:45:00.612562 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612573 | orchestrator |
2026-02-20 02:45:00.612584 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-20 02:45:00.612595 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.144) 0:00:45.868 *******
2026-02-20 02:45:00.612606 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612617 | orchestrator |
2026-02-20 02:45:00.612628 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-20 02:45:00.612639 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.146) 0:00:46.014 *******
2026-02-20 02:45:00.612651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.612686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.612698 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612709 | orchestrator |
2026-02-20 02:45:00.612720 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-20 02:45:00.612730 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.165) 0:00:46.180 *******
2026-02-20 02:45:00.612742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.612753 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.612764 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612774 | orchestrator |
2026-02-20 02:45:00.612785 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-20 02:45:00.612796 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.159) 0:00:46.340 *******
2026-02-20 02:45:00.612808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.612819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.612830 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612841 | orchestrator |
2026-02-20 02:45:00.612852 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-20 02:45:00.612863 | orchestrator | Friday 20 February 2026 02:44:57 +0000 (0:00:00.167) 0:00:46.507 *******
2026-02-20 02:45:00.612874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.612885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.612896 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612907 | orchestrator |
2026-02-20 02:45:00.612933 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-20 02:45:00.612950 | orchestrator | Friday 20 February 2026 02:44:58 +0000 (0:00:00.169) 0:00:46.676 *******
2026-02-20 02:45:00.612961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.612973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.612984 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.612995 | orchestrator |
2026-02-20 02:45:00.613006 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-20 02:45:00.613017 | orchestrator | Friday 20 February 2026 02:44:58 +0000 (0:00:00.164) 0:00:46.841 *******
2026-02-20 02:45:00.613027 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613038 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.613049 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.613060 | orchestrator |
2026-02-20 02:45:00.613071 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-20 02:45:00.613124 | orchestrator | Friday 20 February 2026 02:44:58 +0000 (0:00:00.143) 0:00:46.984 *******
2026-02-20 02:45:00.613143 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.613166 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.613178 | orchestrator |
2026-02-20 02:45:00.613188 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-20 02:45:00.613200 | orchestrator | Friday 20 February 2026 02:44:58 +0000 (0:00:00.351) 0:00:47.336 *******
2026-02-20 02:45:00.613211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.613233 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.613243 | orchestrator |
2026-02-20 02:45:00.613254 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-20 02:45:00.613265 | orchestrator | Friday 20 February 2026 02:44:58 +0000 (0:00:00.159) 0:00:47.496 *******
2026-02-20 02:45:00.613276 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:45:00.613288 | orchestrator |
2026-02-20 02:45:00.613298 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-20 02:45:00.613310 | orchestrator | Friday 20 February 2026 02:44:59 +0000 (0:00:00.554) 0:00:48.050 *******
2026-02-20 02:45:00.613320 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:45:00.613331 | orchestrator |
2026-02-20 02:45:00.613342 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-20 02:45:00.613353 | orchestrator | Friday 20 February 2026 02:44:59 +0000 (0:00:00.503) 0:00:48.553 *******
2026-02-20 02:45:00.613364 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:45:00.613375 | orchestrator |
2026-02-20 02:45:00.613386 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-20 02:45:00.613397 | orchestrator | Friday 20 February 2026 02:45:00 +0000 (0:00:00.143) 0:00:48.696 *******
2026-02-20 02:45:00.613408 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'vg_name': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.613420 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'vg_name': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613431 | orchestrator |
2026-02-20 02:45:00.613442 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-20 02:45:00.613453 | orchestrator | Friday 20 February 2026 02:45:00 +0000 (0:00:00.175) 0:00:48.872 *******
2026-02-20 02:45:00.613464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613476 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:00.613487 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:00.613498 | orchestrator |
2026-02-20 02:45:00.613509 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-20 02:45:00.613520 | orchestrator | Friday 20 February 2026 02:45:00 +0000 (0:00:00.163) 0:00:49.036 *******
2026-02-20 02:45:00.613531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 02:45:00.613549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 02:45:06.934142 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:06.934271 | orchestrator |
2026-02-20 02:45:06.934305 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-20 02:45:06.934318 |
orchestrator | Friday 20 February 2026 02:45:00 +0000 (0:00:00.153) 0:00:49.189 ******* 2026-02-20 02:45:06.934330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})  2026-02-20 02:45:06.934343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})  2026-02-20 02:45:06.934354 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:45:06.934366 | orchestrator | 2026-02-20 02:45:06.934377 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-20 02:45:06.934388 | orchestrator | Friday 20 February 2026 02:45:00 +0000 (0:00:00.161) 0:00:49.351 ******* 2026-02-20 02:45:06.934399 | orchestrator | ok: [testbed-node-4] => { 2026-02-20 02:45:06.934410 | orchestrator |  "lvm_report": { 2026-02-20 02:45:06.934423 | orchestrator |  "lv": [ 2026-02-20 02:45:06.934434 | orchestrator |  { 2026-02-20 02:45:06.934445 | orchestrator |  "lv_name": "osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd", 2026-02-20 02:45:06.934457 | orchestrator |  "vg_name": "ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd" 2026-02-20 02:45:06.934468 | orchestrator |  }, 2026-02-20 02:45:06.934478 | orchestrator |  { 2026-02-20 02:45:06.934489 | orchestrator |  "lv_name": "osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef", 2026-02-20 02:45:06.934500 | orchestrator |  "vg_name": "ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef" 2026-02-20 02:45:06.934511 | orchestrator |  } 2026-02-20 02:45:06.934522 | orchestrator |  ], 2026-02-20 02:45:06.934533 | orchestrator |  "pv": [ 2026-02-20 02:45:06.934544 | orchestrator |  { 2026-02-20 02:45:06.934555 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-20 02:45:06.934566 | orchestrator |  "vg_name": "ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef" 2026-02-20 02:45:06.934577 | orchestrator |  }, 2026-02-20 
02:45:06.934590 | orchestrator |  { 2026-02-20 02:45:06.934603 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-20 02:45:06.934615 | orchestrator |  "vg_name": "ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd" 2026-02-20 02:45:06.934628 | orchestrator |  } 2026-02-20 02:45:06.934641 | orchestrator |  ] 2026-02-20 02:45:06.934653 | orchestrator |  } 2026-02-20 02:45:06.934666 | orchestrator | } 2026-02-20 02:45:06.934679 | orchestrator | 2026-02-20 02:45:06.934691 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-20 02:45:06.934703 | orchestrator | 2026-02-20 02:45:06.934715 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-20 02:45:06.934728 | orchestrator | Friday 20 February 2026 02:45:01 +0000 (0:00:00.297) 0:00:49.648 ******* 2026-02-20 02:45:06.934740 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-20 02:45:06.934753 | orchestrator | 2026-02-20 02:45:06.934766 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-20 02:45:06.934780 | orchestrator | Friday 20 February 2026 02:45:01 +0000 (0:00:00.639) 0:00:50.288 ******* 2026-02-20 02:45:06.934792 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:45:06.934805 | orchestrator | 2026-02-20 02:45:06.934818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.934830 | orchestrator | Friday 20 February 2026 02:45:01 +0000 (0:00:00.251) 0:00:50.539 ******* 2026-02-20 02:45:06.934843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-20 02:45:06.934855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-20 02:45:06.934868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-20 02:45:06.934909 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-20 02:45:06.934922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-20 02:45:06.934934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-20 02:45:06.934945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-20 02:45:06.934956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-20 02:45:06.934967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-20 02:45:06.934978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-20 02:45:06.934989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-20 02:45:06.935000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-20 02:45:06.935010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-20 02:45:06.935021 | orchestrator | 2026-02-20 02:45:06.935032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935043 | orchestrator | Friday 20 February 2026 02:45:02 +0000 (0:00:00.419) 0:00:50.959 ******* 2026-02-20 02:45:06.935054 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935065 | orchestrator | 2026-02-20 02:45:06.935075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935117 | orchestrator | Friday 20 February 2026 02:45:02 +0000 (0:00:00.209) 0:00:51.168 ******* 2026-02-20 02:45:06.935128 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935139 | orchestrator | 2026-02-20 
02:45:06.935150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935181 | orchestrator | Friday 20 February 2026 02:45:02 +0000 (0:00:00.202) 0:00:51.371 ******* 2026-02-20 02:45:06.935193 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935204 | orchestrator | 2026-02-20 02:45:06.935215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935226 | orchestrator | Friday 20 February 2026 02:45:02 +0000 (0:00:00.200) 0:00:51.571 ******* 2026-02-20 02:45:06.935237 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935248 | orchestrator | 2026-02-20 02:45:06.935259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935270 | orchestrator | Friday 20 February 2026 02:45:03 +0000 (0:00:00.207) 0:00:51.779 ******* 2026-02-20 02:45:06.935281 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935292 | orchestrator | 2026-02-20 02:45:06.935303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935314 | orchestrator | Friday 20 February 2026 02:45:03 +0000 (0:00:00.201) 0:00:51.980 ******* 2026-02-20 02:45:06.935324 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935335 | orchestrator | 2026-02-20 02:45:06.935346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935357 | orchestrator | Friday 20 February 2026 02:45:03 +0000 (0:00:00.200) 0:00:52.180 ******* 2026-02-20 02:45:06.935368 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935379 | orchestrator | 2026-02-20 02:45:06.935390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935401 | orchestrator | Friday 20 February 2026 02:45:03 +0000 (0:00:00.199) 
0:00:52.380 ******* 2026-02-20 02:45:06.935412 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:06.935422 | orchestrator | 2026-02-20 02:45:06.935433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935444 | orchestrator | Friday 20 February 2026 02:45:04 +0000 (0:00:00.599) 0:00:52.979 ******* 2026-02-20 02:45:06.935455 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c) 2026-02-20 02:45:06.935476 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c) 2026-02-20 02:45:06.935488 | orchestrator | 2026-02-20 02:45:06.935499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935509 | orchestrator | Friday 20 February 2026 02:45:04 +0000 (0:00:00.434) 0:00:53.414 ******* 2026-02-20 02:45:06.935560 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57) 2026-02-20 02:45:06.935572 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57) 2026-02-20 02:45:06.935583 | orchestrator | 2026-02-20 02:45:06.935594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935605 | orchestrator | Friday 20 February 2026 02:45:05 +0000 (0:00:00.425) 0:00:53.840 ******* 2026-02-20 02:45:06.935615 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9) 2026-02-20 02:45:06.935626 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9) 2026-02-20 02:45:06.935637 | orchestrator | 2026-02-20 02:45:06.935648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935659 | orchestrator | Friday 20 
February 2026 02:45:05 +0000 (0:00:00.431) 0:00:54.271 ******* 2026-02-20 02:45:06.935670 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8) 2026-02-20 02:45:06.935681 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8) 2026-02-20 02:45:06.935692 | orchestrator | 2026-02-20 02:45:06.935703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-20 02:45:06.935714 | orchestrator | Friday 20 February 2026 02:45:06 +0000 (0:00:00.456) 0:00:54.728 ******* 2026-02-20 02:45:06.935725 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-20 02:45:06.935736 | orchestrator | 2026-02-20 02:45:06.935747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:06.935758 | orchestrator | Friday 20 February 2026 02:45:06 +0000 (0:00:00.347) 0:00:55.076 ******* 2026-02-20 02:45:06.935768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-20 02:45:06.935779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-20 02:45:06.935790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-20 02:45:06.935801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-20 02:45:06.935811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-20 02:45:06.935823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-20 02:45:06.935833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-20 02:45:06.935844 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-20 02:45:06.935855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-20 02:45:06.935866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-20 02:45:06.935876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-20 02:45:06.935900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-20 02:45:15.724394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-20 02:45:15.724484 | orchestrator | 2026-02-20 02:45:15.724496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724523 | orchestrator | Friday 20 February 2026 02:45:06 +0000 (0:00:00.428) 0:00:55.505 ******* 2026-02-20 02:45:15.724531 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724539 | orchestrator | 2026-02-20 02:45:15.724547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724554 | orchestrator | Friday 20 February 2026 02:45:07 +0000 (0:00:00.198) 0:00:55.703 ******* 2026-02-20 02:45:15.724561 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724568 | orchestrator | 2026-02-20 02:45:15.724576 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724583 | orchestrator | Friday 20 February 2026 02:45:07 +0000 (0:00:00.250) 0:00:55.953 ******* 2026-02-20 02:45:15.724590 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724597 | orchestrator | 2026-02-20 02:45:15.724604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724612 | 
orchestrator | Friday 20 February 2026 02:45:07 +0000 (0:00:00.209) 0:00:56.162 ******* 2026-02-20 02:45:15.724619 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724626 | orchestrator | 2026-02-20 02:45:15.724633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724640 | orchestrator | Friday 20 February 2026 02:45:07 +0000 (0:00:00.201) 0:00:56.363 ******* 2026-02-20 02:45:15.724647 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724654 | orchestrator | 2026-02-20 02:45:15.724661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724668 | orchestrator | Friday 20 February 2026 02:45:08 +0000 (0:00:00.631) 0:00:56.994 ******* 2026-02-20 02:45:15.724675 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724682 | orchestrator | 2026-02-20 02:45:15.724690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724697 | orchestrator | Friday 20 February 2026 02:45:08 +0000 (0:00:00.213) 0:00:57.208 ******* 2026-02-20 02:45:15.724704 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724711 | orchestrator | 2026-02-20 02:45:15.724719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724726 | orchestrator | Friday 20 February 2026 02:45:08 +0000 (0:00:00.211) 0:00:57.420 ******* 2026-02-20 02:45:15.724733 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724740 | orchestrator | 2026-02-20 02:45:15.724747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724754 | orchestrator | Friday 20 February 2026 02:45:09 +0000 (0:00:00.201) 0:00:57.622 ******* 2026-02-20 02:45:15.724761 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-20 02:45:15.724769 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-20 02:45:15.724777 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-20 02:45:15.724784 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-20 02:45:15.724791 | orchestrator | 2026-02-20 02:45:15.724798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724805 | orchestrator | Friday 20 February 2026 02:45:09 +0000 (0:00:00.622) 0:00:58.244 ******* 2026-02-20 02:45:15.724812 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724819 | orchestrator | 2026-02-20 02:45:15.724826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724833 | orchestrator | Friday 20 February 2026 02:45:09 +0000 (0:00:00.207) 0:00:58.451 ******* 2026-02-20 02:45:15.724841 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724848 | orchestrator | 2026-02-20 02:45:15.724855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724862 | orchestrator | Friday 20 February 2026 02:45:10 +0000 (0:00:00.211) 0:00:58.663 ******* 2026-02-20 02:45:15.724869 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724876 | orchestrator | 2026-02-20 02:45:15.724883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-20 02:45:15.724897 | orchestrator | Friday 20 February 2026 02:45:10 +0000 (0:00:00.202) 0:00:58.866 ******* 2026-02-20 02:45:15.724904 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.724911 | orchestrator | 2026-02-20 02:45:15.724918 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-20 02:45:15.724925 | orchestrator | Friday 20 February 2026 02:45:10 +0000 (0:00:00.205) 0:00:59.071 ******* 2026-02-20 02:45:15.724932 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
02:45:15.724940 | orchestrator | 2026-02-20 02:45:15.724947 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-20 02:45:15.724956 | orchestrator | Friday 20 February 2026 02:45:10 +0000 (0:00:00.132) 0:00:59.204 ******* 2026-02-20 02:45:15.724965 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}}) 2026-02-20 02:45:15.724973 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fe77357-4c85-56ab-aabd-7cb5a18434f2'}}) 2026-02-20 02:45:15.724981 | orchestrator | 2026-02-20 02:45:15.724990 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-20 02:45:15.724998 | orchestrator | Friday 20 February 2026 02:45:10 +0000 (0:00:00.190) 0:00:59.394 ******* 2026-02-20 02:45:15.725007 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}) 2026-02-20 02:45:15.725016 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}) 2026-02-20 02:45:15.725024 | orchestrator | 2026-02-20 02:45:15.725032 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-20 02:45:15.725067 | orchestrator | Friday 20 February 2026 02:45:12 +0000 (0:00:01.862) 0:01:01.256 ******* 2026-02-20 02:45:15.725076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:15.725105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:15.725115 | orchestrator | skipping: 
[testbed-node-5] 2026-02-20 02:45:15.725123 | orchestrator | 2026-02-20 02:45:15.725131 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-20 02:45:15.725139 | orchestrator | Friday 20 February 2026 02:45:13 +0000 (0:00:00.331) 0:01:01.588 ******* 2026-02-20 02:45:15.725148 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}) 2026-02-20 02:45:15.725156 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}) 2026-02-20 02:45:15.725164 | orchestrator | 2026-02-20 02:45:15.725172 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-20 02:45:15.725180 | orchestrator | Friday 20 February 2026 02:45:14 +0000 (0:00:01.389) 0:01:02.978 ******* 2026-02-20 02:45:15.725188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:15.725196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:15.725204 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725213 | orchestrator | 2026-02-20 02:45:15.725221 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-20 02:45:15.725228 | orchestrator | Friday 20 February 2026 02:45:14 +0000 (0:00:00.151) 0:01:03.130 ******* 2026-02-20 02:45:15.725235 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725242 | orchestrator | 2026-02-20 02:45:15.725249 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-20 02:45:15.725262 | 
orchestrator | Friday 20 February 2026 02:45:14 +0000 (0:00:00.141) 0:01:03.271 ******* 2026-02-20 02:45:15.725270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:15.725277 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:15.725284 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725291 | orchestrator | 2026-02-20 02:45:15.725299 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-20 02:45:15.725306 | orchestrator | Friday 20 February 2026 02:45:14 +0000 (0:00:00.160) 0:01:03.431 ******* 2026-02-20 02:45:15.725313 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725320 | orchestrator | 2026-02-20 02:45:15.725327 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-20 02:45:15.725335 | orchestrator | Friday 20 February 2026 02:45:14 +0000 (0:00:00.138) 0:01:03.569 ******* 2026-02-20 02:45:15.725342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:15.725349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:15.725356 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725363 | orchestrator | 2026-02-20 02:45:15.725370 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-20 02:45:15.725378 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.152) 0:01:03.721 ******* 2026-02-20 02:45:15.725385 | orchestrator | 
skipping: [testbed-node-5] 2026-02-20 02:45:15.725392 | orchestrator | 2026-02-20 02:45:15.725399 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-20 02:45:15.725406 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.139) 0:01:03.861 ******* 2026-02-20 02:45:15.725414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:15.725421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:15.725428 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:15.725435 | orchestrator | 2026-02-20 02:45:15.725442 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-20 02:45:15.725449 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.148) 0:01:04.010 ******* 2026-02-20 02:45:15.725457 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:45:15.725464 | orchestrator | 2026-02-20 02:45:15.725471 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-20 02:45:15.725478 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.143) 0:01:04.153 ******* 2026-02-20 02:45:15.725494 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:21.981799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:21.981911 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:21.981927 | orchestrator | 2026-02-20 02:45:21.981940 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-20 02:45:21.981952 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.150) 0:01:04.304 ******* 2026-02-20 02:45:21.981964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:21.981997 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:21.982009 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:21.982084 | orchestrator | 2026-02-20 02:45:21.982119 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-20 02:45:21.982131 | orchestrator | Friday 20 February 2026 02:45:15 +0000 (0:00:00.160) 0:01:04.465 ******* 2026-02-20 02:45:21.982142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 02:45:21.982153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 02:45:21.982174 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:21.982185 | orchestrator | 2026-02-20 02:45:21.982196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-20 02:45:21.982207 | orchestrator | Friday 20 February 2026 02:45:16 +0000 (0:00:00.342) 0:01:04.808 ******* 2026-02-20 02:45:21.982218 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:45:21.982229 | orchestrator | 2026-02-20 02:45:21.982240 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-20 02:45:21.982252 | orchestrator | Friday 20 February 2026 02:45:16 +0000 
(0:00:00.142) 0:01:04.950 *******
2026-02-20 02:45:21.982263 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.982273 | orchestrator |
2026-02-20 02:45:21.982284 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-20 02:45:21.982295 | orchestrator | Friday 20 February 2026 02:45:16 +0000 (0:00:00.141) 0:01:05.092 *******
2026-02-20 02:45:21.982306 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.982316 | orchestrator |
2026-02-20 02:45:21.982327 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-20 02:45:21.982340 | orchestrator | Friday 20 February 2026 02:45:16 +0000 (0:00:00.137) 0:01:05.229 *******
2026-02-20 02:45:21.982353 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:45:21.982366 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-20 02:45:21.982378 | orchestrator | }
2026-02-20 02:45:21.982390 | orchestrator |
2026-02-20 02:45:21.982402 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-20 02:45:21.982415 | orchestrator | Friday 20 February 2026 02:45:16 +0000 (0:00:00.147) 0:01:05.376 *******
2026-02-20 02:45:21.982427 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:45:21.982439 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-20 02:45:21.982451 | orchestrator | }
2026-02-20 02:45:21.982463 | orchestrator |
2026-02-20 02:45:21.982475 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-20 02:45:21.982487 | orchestrator | Friday 20 February 2026 02:45:16 +0000 (0:00:00.148) 0:01:05.525 *******
2026-02-20 02:45:21.982499 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:45:21.982512 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-20 02:45:21.982524 | orchestrator | }
2026-02-20 02:45:21.982536 | orchestrator |
2026-02-20 02:45:21.982548 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-20 02:45:21.982560 | orchestrator | Friday 20 February 2026 02:45:17 +0000 (0:00:00.149) 0:01:05.674 *******
2026-02-20 02:45:21.982572 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:21.982584 | orchestrator |
2026-02-20 02:45:21.982596 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-20 02:45:21.982608 | orchestrator | Friday 20 February 2026 02:45:17 +0000 (0:00:00.515) 0:01:06.190 *******
2026-02-20 02:45:21.982620 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:21.982632 | orchestrator |
2026-02-20 02:45:21.982644 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-20 02:45:21.982656 | orchestrator | Friday 20 February 2026 02:45:18 +0000 (0:00:00.507) 0:01:06.697 *******
2026-02-20 02:45:21.982678 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:21.982690 | orchestrator |
2026-02-20 02:45:21.982701 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-20 02:45:21.982712 | orchestrator | Friday 20 February 2026 02:45:18 +0000 (0:00:00.536) 0:01:07.233 *******
2026-02-20 02:45:21.982722 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:21.982733 | orchestrator |
2026-02-20 02:45:21.982744 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-20 02:45:21.982755 | orchestrator | Friday 20 February 2026 02:45:18 +0000 (0:00:00.164) 0:01:07.398 *******
2026-02-20 02:45:21.982765 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.982776 | orchestrator |
2026-02-20 02:45:21.982787 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-20 02:45:21.982798 | orchestrator | Friday 20 February 2026 02:45:18 +0000 (0:00:00.116) 0:01:07.515 *******
2026-02-20 02:45:21.982809 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.982819 | orchestrator |
2026-02-20 02:45:21.982830 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-20 02:45:21.982841 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.311) 0:01:07.826 *******
2026-02-20 02:45:21.982852 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:45:21.982863 | orchestrator |     "vgs_report": {
2026-02-20 02:45:21.982888 | orchestrator |         "vg": []
2026-02-20 02:45:21.982942 | orchestrator |     }
2026-02-20 02:45:21.982955 | orchestrator | }
2026-02-20 02:45:21.982966 | orchestrator |
2026-02-20 02:45:21.982977 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-20 02:45:21.982988 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.140) 0:01:07.967 *******
2026-02-20 02:45:21.982999 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983010 | orchestrator |
2026-02-20 02:45:21.983020 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-20 02:45:21.983031 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.133) 0:01:08.100 *******
2026-02-20 02:45:21.983042 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983053 | orchestrator |
2026-02-20 02:45:21.983064 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-20 02:45:21.983075 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.134) 0:01:08.235 *******
2026-02-20 02:45:21.983085 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983114 | orchestrator |
2026-02-20 02:45:21.983125 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-20 02:45:21.983136 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.132) 0:01:08.367 *******
2026-02-20 02:45:21.983147 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983157 | orchestrator |
2026-02-20 02:45:21.983168 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-20 02:45:21.983179 | orchestrator | Friday 20 February 2026 02:45:19 +0000 (0:00:00.136) 0:01:08.503 *******
2026-02-20 02:45:21.983190 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983200 | orchestrator |
2026-02-20 02:45:21.983211 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-20 02:45:21.983222 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.139) 0:01:08.643 *******
2026-02-20 02:45:21.983233 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983243 | orchestrator |
2026-02-20 02:45:21.983254 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-20 02:45:21.983265 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.139) 0:01:08.783 *******
2026-02-20 02:45:21.983276 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983287 | orchestrator |
2026-02-20 02:45:21.983297 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-20 02:45:21.983308 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.136) 0:01:08.919 *******
2026-02-20 02:45:21.983319 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983337 | orchestrator |
2026-02-20 02:45:21.983348 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-20 02:45:21.983359 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.143) 0:01:09.063 *******
2026-02-20 02:45:21.983370 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983380 | orchestrator |
2026-02-20 02:45:21.983391 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-20 02:45:21.983402 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.130) 0:01:09.194 *******
2026-02-20 02:45:21.983413 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983424 | orchestrator |
2026-02-20 02:45:21.983434 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-20 02:45:21.983445 | orchestrator | Friday 20 February 2026 02:45:20 +0000 (0:00:00.137) 0:01:09.331 *******
2026-02-20 02:45:21.983456 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983466 | orchestrator |
2026-02-20 02:45:21.983477 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-20 02:45:21.983488 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.338) 0:01:09.670 *******
2026-02-20 02:45:21.983499 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983510 | orchestrator |
2026-02-20 02:45:21.983521 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-20 02:45:21.983532 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.135) 0:01:09.806 *******
2026-02-20 02:45:21.983543 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983553 | orchestrator |
2026-02-20 02:45:21.983564 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-20 02:45:21.983575 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.140) 0:01:09.947 *******
2026-02-20 02:45:21.983586 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983596 | orchestrator |
2026-02-20 02:45:21.983607 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-20 02:45:21.983618 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.142) 0:01:10.089 *******
2026-02-20 02:45:21.983629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:21.983640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:21.983651 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983662 | orchestrator |
2026-02-20 02:45:21.983673 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-20 02:45:21.983684 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.157) 0:01:10.246 *******
2026-02-20 02:45:21.983695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:21.983706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:21.983717 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:21.983727 | orchestrator |
2026-02-20 02:45:21.983738 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-20 02:45:21.983749 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.155) 0:01:10.402 *******
2026-02-20 02:45:21.983773 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.976855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.976946 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.976959 | orchestrator |
2026-02-20 02:45:24.976969 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-20 02:45:24.977002 | orchestrator | Friday 20 February 2026 02:45:21 +0000 (0:00:00.158) 0:01:10.560 *******
2026-02-20 02:45:24.977011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977029 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977038 | orchestrator |
2026-02-20 02:45:24.977047 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-20 02:45:24.977056 | orchestrator | Friday 20 February 2026 02:45:22 +0000 (0:00:00.163) 0:01:10.723 *******
2026-02-20 02:45:24.977064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977082 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977090 | orchestrator |
2026-02-20 02:45:24.977133 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-20 02:45:24.977148 | orchestrator | Friday 20 February 2026 02:45:22 +0000 (0:00:00.169) 0:01:10.893 *******
2026-02-20 02:45:24.977164 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977194 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977204 | orchestrator |
2026-02-20 02:45:24.977213 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-20 02:45:24.977221 | orchestrator | Friday 20 February 2026 02:45:22 +0000 (0:00:00.146) 0:01:11.039 *******
2026-02-20 02:45:24.977230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977238 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977247 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977256 | orchestrator |
2026-02-20 02:45:24.977264 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-20 02:45:24.977273 | orchestrator | Friday 20 February 2026 02:45:22 +0000 (0:00:00.151) 0:01:11.191 *******
2026-02-20 02:45:24.977283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977319 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977335 | orchestrator |
2026-02-20 02:45:24.977349 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-20 02:45:24.977364 | orchestrator | Friday 20 February 2026 02:45:22 +0000 (0:00:00.146) 0:01:11.337 *******
2026-02-20 02:45:24.977380 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:24.977396 | orchestrator |
2026-02-20 02:45:24.977412 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-20 02:45:24.977422 | orchestrator | Friday 20 February 2026 02:45:23 +0000 (0:00:00.698) 0:01:12.036 *******
2026-02-20 02:45:24.977433 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:24.977443 | orchestrator |
2026-02-20 02:45:24.977462 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-20 02:45:24.977472 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.564) 0:01:12.601 *******
2026-02-20 02:45:24.977482 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:24.977491 | orchestrator |
2026-02-20 02:45:24.977501 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-20 02:45:24.977511 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.135) 0:01:12.736 *******
2026-02-20 02:45:24.977586 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'vg_name': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977598 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'vg_name': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977608 | orchestrator |
2026-02-20 02:45:24.977631 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-20 02:45:24.977641 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.193) 0:01:12.930 *******
2026-02-20 02:45:24.977666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977685 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977694 | orchestrator |
2026-02-20 02:45:24.977702 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-20 02:45:24.977711 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.155) 0:01:13.086 *******
2026-02-20 02:45:24.977720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977738 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977748 | orchestrator |
2026-02-20 02:45:24.977756 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-20 02:45:24.977764 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.154) 0:01:13.240 *******
2026-02-20 02:45:24.977772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 02:45:24.977779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 02:45:24.977787 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:24.977795 | orchestrator |
2026-02-20 02:45:24.977803 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-20 02:45:24.977811 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.143) 0:01:13.384 *******
2026-02-20 02:45:24.977819 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 02:45:24.977827 | orchestrator |     "lvm_report": {
2026-02-20 02:45:24.977835 | orchestrator |         "lv": [
2026-02-20 02:45:24.977843 | orchestrator |             {
2026-02-20 02:45:24.977851 | orchestrator |                 "lv_name": "osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2",
2026-02-20 02:45:24.977859 | orchestrator |                 "vg_name": "ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2"
2026-02-20 02:45:24.977867 | orchestrator |             },
2026-02-20 02:45:24.977875 | orchestrator |             {
2026-02-20 02:45:24.977883 | orchestrator |                 "lv_name": "osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae",
2026-02-20 02:45:24.977890 | orchestrator |                 "vg_name": "ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae"
2026-02-20 02:45:24.977898 | orchestrator |             }
2026-02-20 02:45:24.977917 | orchestrator |         ],
2026-02-20 02:45:24.977925 | orchestrator |         "pv": [
2026-02-20 02:45:24.977932 | orchestrator |             {
2026-02-20 02:45:24.977940 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-20 02:45:24.977948 | orchestrator |                 "vg_name": "ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae"
2026-02-20 02:45:24.977956 | orchestrator |             },
2026-02-20 02:45:24.977964 | orchestrator |             {
2026-02-20 02:45:24.977972 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-20 02:45:24.977980 | orchestrator |                 "vg_name": "ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2"
2026-02-20 02:45:24.977987 | orchestrator |             }
2026-02-20 02:45:24.977995 | orchestrator |         ]
2026-02-20 02:45:24.978003 | orchestrator |     }
2026-02-20 02:45:24.978011 | orchestrator | }
2026-02-20 02:45:24.978072 | orchestrator |
2026-02-20 02:45:24.978081 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:45:24.978089 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-20 02:45:24.978118 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-20 02:45:24.978127 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-02-20 02:45:24.978135 | orchestrator |
2026-02-20 02:45:24.978143 | orchestrator |
2026-02-20 02:45:24.978151 | orchestrator |
2026-02-20 02:45:24.978159 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:45:24.978166 | orchestrator | Friday 20 February 2026 02:45:24 +0000 (0:00:00.144) 0:01:13.528 *******
2026-02-20 02:45:24.978174 | orchestrator | ===============================================================================
2026-02-20 02:45:24.978182 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s
2026-02-20 02:45:24.978190 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s
2026-02-20 02:45:24.978198 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.80s
2026-02-20 02:45:24.978206 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s
2026-02-20 02:45:24.978213 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s
2026-02-20 02:45:24.978221 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s
2026-02-20 02:45:24.978233 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s
2026-02-20 02:45:24.978241 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s
2026-02-20 02:45:24.978255 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s
2026-02-20 02:45:25.295767 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.12s
2026-02-20 02:45:25.295855 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s
2026-02-20 02:45:25.295866 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s
2026-02-20 02:45:25.295875 | orchestrator | Print LVM report data --------------------------------------------------- 0.75s
2026-02-20 02:45:25.295885 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.75s
2026-02-20 02:45:25.295894 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-02-20 02:45:25.295903 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-02-20 02:45:25.295911 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.70s
2026-02-20 02:45:25.295920 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2026-02-20 02:45:25.295929 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.69s
2026-02-20 02:45:25.295938 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.68s
2026-02-20 02:45:37.549632 | orchestrator | 2026-02-20 02:45:37 | INFO  | Task 9d42085b-95ac-4b58-a4c2-b60931af0731 (facts) was prepared for execution.
2026-02-20 02:45:37.549743 | orchestrator | 2026-02-20 02:45:37 | INFO  | It takes a moment until task 9d42085b-95ac-4b58-a4c2-b60931af0731 (facts) has been started and output is visible here.
2026-02-20 02:45:50.061923 | orchestrator |
2026-02-20 02:45:50.062095 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-20 02:45:50.062153 | orchestrator |
2026-02-20 02:45:50.062168 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-20 02:45:50.062180 | orchestrator | Friday 20 February 2026 02:45:41 +0000 (0:00:00.199) 0:00:00.199 *******
2026-02-20 02:45:50.062191 | orchestrator | ok: [testbed-manager]
2026-02-20 02:45:50.062203 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:45:50.062214 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:45:50.062226 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:45:50.062248 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:45:50.062270 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:45:50.062282 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:50.062293 | orchestrator |
2026-02-20 02:45:50.062305 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-20 02:45:50.062316 | orchestrator | Friday 20 February 2026 02:45:42 +0000 (0:00:01.020) 0:00:01.219 *******
2026-02-20 02:45:50.062327 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:45:50.062339 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:45:50.062350 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:45:50.062361 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:45:50.062372 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:45:50.062383 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:50.062394 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:50.062405 | orchestrator |
2026-02-20 02:45:50.062416 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-20 02:45:50.062427 | orchestrator |
2026-02-20 02:45:50.062438 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 02:45:50.062449 | orchestrator | Friday 20 February 2026 02:45:43 +0000 (0:00:01.097) 0:00:02.317 *******
2026-02-20 02:45:50.062461 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:45:50.062474 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:45:50.062486 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:45:50.062499 | orchestrator | ok: [testbed-manager]
2026-02-20 02:45:50.062511 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:45:50.062524 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:45:50.062536 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:45:50.062549 | orchestrator |
2026-02-20 02:45:50.062561 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-20 02:45:50.062573 | orchestrator |
2026-02-20 02:45:50.062586 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-20 02:45:50.062599 | orchestrator | Friday 20 February 2026 02:45:49 +0000 (0:00:05.279) 0:00:07.597 *******
2026-02-20 02:45:50.062612 | orchestrator | skipping: [testbed-manager]
2026-02-20 02:45:50.062624 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:45:50.062637 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:45:50.062650 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:45:50.062662 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:45:50.062675 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:45:50.062688 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:45:50.062701 | orchestrator |
2026-02-20 02:45:50.062714 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 02:45:50.062727 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062741 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062780 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062794 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062808 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062835 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062847 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 02:45:50.062858 | orchestrator |
2026-02-20 02:45:50.062869 | orchestrator |
2026-02-20 02:45:50.062880 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 02:45:50.062891 | orchestrator | Friday 20 February 2026 02:45:49 +0000 (0:00:00.567) 0:00:08.164 *******
2026-02-20 02:45:50.062902 | orchestrator | ===============================================================================
2026-02-20 02:45:50.062913 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.28s
2026-02-20 02:45:50.062924 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2026-02-20 02:45:50.062935 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2026-02-20 02:45:50.062946 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-02-20 02:45:52.323460 | orchestrator | 2026-02-20 02:45:52 | INFO  | Task 45fa7352-5806-4252-8225-639e4bfb309c (ceph) was prepared for execution.
2026-02-20 02:45:52.323574 | orchestrator | 2026-02-20 02:45:52 | INFO  | It takes a moment until task 45fa7352-5806-4252-8225-639e4bfb309c (ceph) has been started and output is visible here.
2026-02-20 02:46:09.549777 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-20 02:46:09.549894 | orchestrator | 2.16.14
2026-02-20 02:46:09.549909 | orchestrator |
2026-02-20 02:46:09.549922 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-20 02:46:09.549934 | orchestrator |
2026-02-20 02:46:09.549980 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 02:46:09.549991 | orchestrator | Friday 20 February 2026 02:45:57 +0000 (0:00:00.634) 0:00:00.634 *******
2026-02-20 02:46:09.550004 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:46:09.550075 | orchestrator |
2026-02-20 02:46:09.550088 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 02:46:09.550099 | orchestrator | Friday 20 February 2026 02:45:58 +0000 (0:00:01.205) 0:00:01.839 *******
2026-02-20 02:46:09.550111 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550122 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550177 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550190 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550201 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550212 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550223 | orchestrator |
2026-02-20 02:46:09.550234 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 02:46:09.550245 | orchestrator | Friday 20 February 2026 02:45:59 +0000 (0:00:01.273) 0:00:03.112 *******
2026-02-20 02:46:09.550256 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550267 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550278 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550289 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550299 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550310 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550347 | orchestrator |
2026-02-20 02:46:09.550362 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 02:46:09.550376 | orchestrator | Friday 20 February 2026 02:46:00 +0000 (0:00:00.637) 0:00:03.750 *******
2026-02-20 02:46:09.550388 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550401 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550413 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550425 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550437 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550450 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550462 | orchestrator |
2026-02-20 02:46:09.550474 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 02:46:09.550487 | orchestrator | Friday 20 February 2026 02:46:01 +0000 (0:00:00.842) 0:00:04.592 *******
2026-02-20 02:46:09.550499 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550511 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550524 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550536 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550548 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550561 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550573 | orchestrator |
2026-02-20 02:46:09.550586 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 02:46:09.550599 | orchestrator | Friday 20 February 2026 02:46:01 +0000 (0:00:00.627) 0:00:05.220 *******
2026-02-20 02:46:09.550611 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550623 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550635 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550648 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550661 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550673 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550683 | orchestrator |
2026-02-20 02:46:09.550694 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 02:46:09.550705 | orchestrator | Friday 20 February 2026 02:46:02 +0000 (0:00:00.531) 0:00:05.752 *******
2026-02-20 02:46:09.550716 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550726 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:09.550737 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:09.550748 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:09.550758 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:09.550769 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:09.550779 | orchestrator |
2026-02-20 02:46:09.550790 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 02:46:09.550801 | orchestrator | Friday 20 February 2026 02:46:02 +0000 (0:00:00.779) 0:00:06.532 *******
2026-02-20 02:46:09.550812 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:09.550823 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:09.550834 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:09.550859 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:09.550870 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:09.550881 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:09.550891 | orchestrator |
2026-02-20 02:46:09.550902 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 02:46:09.550913 | orchestrator | Friday 20 February 2026 02:46:03 +0000 (0:00:00.596) 0:00:07.128 *******
2026-02-20 02:46:09.550924 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:09.550935 | orchestrator |
ok: [testbed-node-4] 2026-02-20 02:46:09.550946 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:46:09.550956 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:46:09.550967 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:46:09.550977 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:46:09.550988 | orchestrator | 2026-02-20 02:46:09.550999 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 02:46:09.551010 | orchestrator | Friday 20 February 2026 02:46:04 +0000 (0:00:00.759) 0:00:07.888 ******* 2026-02-20 02:46:09.551021 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:46:09.551039 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:46:09.551050 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:46:09.551060 | orchestrator | 2026-02-20 02:46:09.551071 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 02:46:09.551082 | orchestrator | Friday 20 February 2026 02:46:05 +0000 (0:00:00.665) 0:00:08.554 ******* 2026-02-20 02:46:09.551093 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:46:09.551103 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:46:09.551114 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:46:09.551199 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:46:09.551213 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:46:09.551224 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:46:09.551234 | orchestrator | 2026-02-20 02:46:09.551245 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 02:46:09.551256 | orchestrator | Friday 20 February 2026 02:46:05 +0000 (0:00:00.737) 0:00:09.291 ******* 2026-02-20 02:46:09.551267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-02-20 02:46:09.551278 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:46:09.551289 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:46:09.551300 | orchestrator | 2026-02-20 02:46:09.551311 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 02:46:09.551321 | orchestrator | Friday 20 February 2026 02:46:08 +0000 (0:00:02.452) 0:00:11.743 ******* 2026-02-20 02:46:09.551332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 02:46:09.551344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 02:46:09.551354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 02:46:09.551365 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:09.551375 | orchestrator | 2026-02-20 02:46:09.551386 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 02:46:09.551397 | orchestrator | Friday 20 February 2026 02:46:08 +0000 (0:00:00.410) 0:00:12.154 ******* 2026-02-20 02:46:09.551409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551423 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551445 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:09.551456 | orchestrator | 2026-02-20 02:46:09.551467 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 02:46:09.551478 | orchestrator | Friday 20 February 2026 02:46:09 +0000 (0:00:00.602) 0:00:12.756 ******* 2026-02-20 02:46:09.551490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:09.551540 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:09.551551 | orchestrator | 2026-02-20 02:46:09.551562 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-20 02:46:09.551573 | orchestrator | Friday 20 February 2026 02:46:09 +0000 (0:00:00.155) 0:00:12.912 ******* 2026-02-20 02:46:09.551595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 02:46:06.686853', 'end': '2026-02-20 02:46:06.730764', 'delta': '0:00:00.043911', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.024416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 02:46:07.245475', 'end': '2026-02-20 02:46:07.288235', 'delta': '0:00:00.042760', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.024555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 02:46:07.828202', 'end': '2026-02-20 02:46:07.872405', 'delta': 
'0:00:00.044203', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.024601 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.024617 | orchestrator | 2026-02-20 02:46:19.024630 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 02:46:19.024642 | orchestrator | Friday 20 February 2026 02:46:09 +0000 (0:00:00.174) 0:00:13.086 ******* 2026-02-20 02:46:19.024653 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:46:19.024665 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:46:19.024676 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:46:19.024687 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:46:19.024698 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:46:19.024708 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:46:19.024719 | orchestrator | 2026-02-20 02:46:19.024758 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 02:46:19.024769 | orchestrator | Friday 20 February 2026 02:46:10 +0000 (0:00:00.726) 0:00:13.813 ******* 2026-02-20 02:46:19.024781 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:46:19.024792 | orchestrator | 2026-02-20 02:46:19.024803 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 02:46:19.024813 | orchestrator | Friday 20 February 2026 02:46:11 +0000 (0:00:00.830) 0:00:14.643 ******* 2026-02-20 02:46:19.024824 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.024835 | 
orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.024846 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.024856 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.024867 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.024878 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.024889 | orchestrator | 2026-02-20 02:46:19.024900 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 02:46:19.024912 | orchestrator | Friday 20 February 2026 02:46:11 +0000 (0:00:00.764) 0:00:15.408 ******* 2026-02-20 02:46:19.024925 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.024938 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.024950 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.024963 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.024992 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025005 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025017 | orchestrator | 2026-02-20 02:46:19.025030 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 02:46:19.025043 | orchestrator | Friday 20 February 2026 02:46:13 +0000 (0:00:01.169) 0:00:16.578 ******* 2026-02-20 02:46:19.025055 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025069 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025088 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025107 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025126 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025176 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025198 | orchestrator | 2026-02-20 02:46:19.025217 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 02:46:19.025234 | orchestrator | Friday 20 February 2026 02:46:13 
+0000 (0:00:00.593) 0:00:17.171 ******* 2026-02-20 02:46:19.025247 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025260 | orchestrator | 2026-02-20 02:46:19.025273 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 02:46:19.025285 | orchestrator | Friday 20 February 2026 02:46:13 +0000 (0:00:00.127) 0:00:17.299 ******* 2026-02-20 02:46:19.025298 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025310 | orchestrator | 2026-02-20 02:46:19.025321 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 02:46:19.025332 | orchestrator | Friday 20 February 2026 02:46:13 +0000 (0:00:00.221) 0:00:17.521 ******* 2026-02-20 02:46:19.025343 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025353 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025365 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025375 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025386 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025397 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025408 | orchestrator | 2026-02-20 02:46:19.025439 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 02:46:19.025450 | orchestrator | Friday 20 February 2026 02:46:14 +0000 (0:00:00.761) 0:00:18.282 ******* 2026-02-20 02:46:19.025461 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025471 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025482 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025493 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025503 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025526 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025537 | orchestrator | 2026-02-20 02:46:19.025547 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-20 02:46:19.025559 | orchestrator | Friday 20 February 2026 02:46:15 +0000 (0:00:00.596) 0:00:18.878 ******* 2026-02-20 02:46:19.025569 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025580 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025591 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025601 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025612 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025623 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025633 | orchestrator | 2026-02-20 02:46:19.025644 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 02:46:19.025655 | orchestrator | Friday 20 February 2026 02:46:16 +0000 (0:00:00.798) 0:00:19.677 ******* 2026-02-20 02:46:19.025666 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025676 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025687 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025698 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025708 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025719 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025730 | orchestrator | 2026-02-20 02:46:19.025741 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 02:46:19.025752 | orchestrator | Friday 20 February 2026 02:46:16 +0000 (0:00:00.588) 0:00:20.265 ******* 2026-02-20 02:46:19.025762 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025773 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025784 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025794 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025805 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025816 | orchestrator 
| skipping: [testbed-node-2] 2026-02-20 02:46:19.025826 | orchestrator | 2026-02-20 02:46:19.025837 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 02:46:19.025848 | orchestrator | Friday 20 February 2026 02:46:17 +0000 (0:00:00.779) 0:00:21.044 ******* 2026-02-20 02:46:19.025859 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025870 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025880 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025891 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025902 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.025912 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.025923 | orchestrator | 2026-02-20 02:46:19.025934 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 02:46:19.025945 | orchestrator | Friday 20 February 2026 02:46:18 +0000 (0:00:00.613) 0:00:21.657 ******* 2026-02-20 02:46:19.025956 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.025967 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.025977 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.025988 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.025999 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.026010 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.026082 | orchestrator | 2026-02-20 02:46:19.026094 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 02:46:19.026105 | orchestrator | Friday 20 February 2026 02:46:18 +0000 (0:00:00.785) 0:00:22.443 ******* 2026-02-20 02:46:19.026126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.026181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.026230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.121955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.122256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.122269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.122279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.122293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-02-20 02:46:19.122309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.122325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-20 02:46:19.291445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.291547 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop0', ...})  [testbed-node-4: zero-size virtual loop-device facts elided]  2026-02-20 02:46:19.291565 | orchestrator | skipping: [testbed-node-4] => (items loop1, loop2, loop3, loop4, loop5: duplicate loop-device facts elided)  2026-02-20 02:46:19.291667 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:46:19.291681 | orchestrator | skipping: [testbed-node-4] => (items loop6, loop7: duplicate loop-device facts elided)  2026-02-20 02:46:19.291762 | orchestrator | skipping: [testbed-node-4] => (items sda [partitions sda1, sda14, sda15, sda16], sdb, sdc, sdd, sr0: QEMU virtual-disk facts elided)  2026-02-20 02:46:19.418214 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0 through loop5: device facts elided)  2026-02-20 02:46:19.418387 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:46:19.418400 | orchestrator | skipping: [testbed-node-5] => (items loop6, loop7, sda [partitions sda1, sda14, sda15, sda16], sdb, sdc, sdd, sr0: device facts elided)  2026-02-20 02:46:19.574433 | orchestrator | skipping: [testbed-node-0] => (items loop0 through loop7, sda [partitions sda1, sda14, sda15, sda16], sr0: device facts elided)  2026-02-20 02:46:19.574578 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:46:19.574588 | orchestrator | skipping: [testbed-node-1] => (items loop0, loop1, loop2, loop3, loop4, loop5: duplicate loop-device facts elided)  2026-02-20 02:46:19.700309 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:46:19.700354 | orchestrator | skipping: [testbed-node-1] => (items loop6, loop7, sda [partitions sda1, sda14, sda15, sda16], sr0: device facts elided)  2026-02-20 02:46:19.700522 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:46:19.700539 | orchestrator | skipping: [testbed-node-2] => (items loop0 through loop7, sda [partitions sda1, sda14, sda15, sda16]: device facts elided)  2026-02-20 02:46:19.989086 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-20 02:46:19.989102 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:46:19.989114 | orchestrator | 2026-02-20 02:46:19.989125 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 02:46:19.989136 | orchestrator | Friday 20 February 2026 02:46:19 +0000 (0:00:00.885) 0:00:23.328 ******* 2026-02-20 02:46:19.989218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989350 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:19.989367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.274483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381445 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381544 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.381792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.467803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:46:20.467903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.467934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.467945 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.467955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468028 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468047 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:20.468058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468076 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.468101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545669 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545807 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545921 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.545942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.546170 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.546207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.546242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.546263 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.546303 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687689 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687849 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687872 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:20.687885 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687919 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687946 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687965 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.687989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.688000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.688008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.688030 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895050 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895238 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:20.895257 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:20.895267 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:20.895279 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895303 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895339 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895394 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895407 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895435 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:20.895462 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:46:32.005068 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.005262 | orchestrator |
2026-02-20 02:46:32.005284 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 02:46:32.005298 | orchestrator | Friday 20 February 2026 02:46:20 +0000 (0:00:01.104) 0:00:24.433 *******
2026-02-20 02:46:32.005309 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:32.005321 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:32.005332 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:32.005343 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:32.005354 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:32.005364 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:32.005375 | orchestrator |
2026-02-20 02:46:32.005387 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 02:46:32.005398 | orchestrator | Friday 20 February 2026 02:46:21 +0000 (0:00:00.982) 0:00:25.415 *******
2026-02-20 02:46:32.005409 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:32.005420 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:32.005431 | 
orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:32.005442 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:32.005453 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:32.005463 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:32.005474 | orchestrator |
2026-02-20 02:46:32.005485 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 02:46:32.005496 | orchestrator | Friday 20 February 2026 02:46:22 +0000 (0:00:00.752) 0:00:26.168 *******
2026-02-20 02:46:32.005507 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.005518 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.005529 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.005540 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.005550 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.005562 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.005575 | orchestrator |
2026-02-20 02:46:32.005588 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 02:46:32.005601 | orchestrator | Friday 20 February 2026 02:46:23 +0000 (0:00:00.563) 0:00:26.731 *******
2026-02-20 02:46:32.005614 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.005627 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.005641 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.005654 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.005666 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.005702 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.005714 | orchestrator |
2026-02-20 02:46:32.005725 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 02:46:32.005736 | orchestrator | Friday 20 February 2026 02:46:23 +0000 (0:00:00.737) 0:00:27.469 *******
2026-02-20 02:46:32.005747 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.005757 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.005815 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.005829 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.005840 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.005851 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.005862 | orchestrator |
2026-02-20 02:46:32.005872 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 02:46:32.005883 | orchestrator | Friday 20 February 2026 02:46:24 +0000 (0:00:00.589) 0:00:28.059 *******
2026-02-20 02:46:32.005894 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.005905 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.005915 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.005926 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.005937 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.005947 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.005958 | orchestrator |
2026-02-20 02:46:32.005969 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 02:46:32.005980 | orchestrator | Friday 20 February 2026 02:46:25 +0000 (0:00:00.792) 0:00:28.851 *******
2026-02-20 02:46:32.005990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-20 02:46:32.006002 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 02:46:32.006013 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-20 02:46:32.006082 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-20 02:46:32.006094 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 02:46:32.006104 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-20 02:46:32.006115 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2026-02-20 02:46:32.006126 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 02:46:32.006137 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 02:46:32.006184 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 02:46:32.006196 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 02:46:32.006207 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-20 02:46:32.006217 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 02:46:32.006228 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 02:46:32.006250 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 02:46:32.006261 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 02:46:32.006272 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 02:46:32.006283 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 02:46:32.006294 | orchestrator |
2026-02-20 02:46:32.006305 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 02:46:32.006316 | orchestrator | Friday 20 February 2026 02:46:26 +0000 (0:00:01.644) 0:00:30.496 *******
2026-02-20 02:46:32.006327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0) 
2026-02-20 02:46:32.006339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1) 
2026-02-20 02:46:32.006350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2) 
2026-02-20 02:46:32.006361 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.006372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0) 
2026-02-20 02:46:32.006383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1) 
2026-02-20 02:46:32.006394 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2) 
2026-02-20 02:46:32.006425 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.006448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0) 
2026-02-20 02:46:32.006459 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1) 
2026-02-20 02:46:32.006470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2) 
2026-02-20 02:46:32.006480 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.006491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-02-20 02:46:32.006502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-02-20 02:46:32.006513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-02-20 02:46:32.006524 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.006535 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-02-20 02:46:32.006545 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-02-20 02:46:32.006556 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2026-02-20 02:46:32.006567 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.006578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0) 
2026-02-20 02:46:32.006588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1) 
2026-02-20 02:46:32.006599 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2) 
2026-02-20 02:46:32.006610 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.006621 | orchestrator |
2026-02-20 02:46:32.006632 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 02:46:32.006643 | orchestrator | Friday 20 February 2026 02:46:27 +0000 (0:00:00.867) 0:00:31.363 *******
2026-02-20 02:46:32.006654 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:32.006665 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:32.006675 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:32.006687 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:46:32.006698 | orchestrator |
2026-02-20 02:46:32.006710 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 02:46:32.006723 | orchestrator | Friday 20 February 2026 02:46:28 +0000 (0:00:00.971) 0:00:32.335 *******
2026-02-20 02:46:32.006734 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.006744 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.006755 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.006766 | orchestrator |
2026-02-20 02:46:32.006777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 02:46:32.006788 | orchestrator | Friday 20 February 2026 02:46:29 +0000 (0:00:00.339) 0:00:32.675 *******
2026-02-20 02:46:32.006798 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.006809 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.006820 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.006830 | orchestrator |
2026-02-20 02:46:32.006841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 02:46:32.006852 | orchestrator | Friday 20 February 2026 02:46:29 +0000 (0:00:00.343) 0:00:33.018 *******
2026-02-20 02:46:32.006863 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.006873 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:32.006884 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:32.006895 | orchestrator |
2026-02-20 02:46:32.006906 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 02:46:32.006916 | orchestrator | Friday 20 February 2026 02:46:29 +0000 (0:00:00.347) 0:00:33.365 *******
2026-02-20 02:46:32.006927 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:32.006940 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:32.006959 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:32.006977 | orchestrator |
2026-02-20 02:46:32.007005 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 02:46:32.007027 | orchestrator | Friday 20 February 2026 02:46:30 +0000 (0:00:00.655) 0:00:34.020 *******
2026-02-20 02:46:32.007056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-20 02:46:32.007074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-20 02:46:32.007091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-20 02:46:32.007108 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.007126 | orchestrator |
2026-02-20 02:46:32.007143 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 02:46:32.007217 | orchestrator | Friday 20 February 2026 02:46:30 +0000 (0:00:00.410) 0:00:34.430 *******
2026-02-20 02:46:32.007239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-20 02:46:32.007258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-20 02:46:32.007275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-20 02:46:32.007294 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.007305 | orchestrator |
2026-02-20 02:46:32.007316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 02:46:32.007330 | orchestrator | Friday 20 February 2026 02:46:31 +0000 (0:00:00.389) 0:00:34.820 *******
2026-02-20 02:46:32.007349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-20 02:46:32.007367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-20 02:46:32.007385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-20 02:46:32.007403 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:32.007422 | orchestrator |
2026-02-20 02:46:32.007440 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 02:46:32.007459 | orchestrator | Friday 20 February 2026 02:46:31 +0000 (0:00:00.376) 0:00:35.197 *******
2026-02-20 02:46:32.007479 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:32.007497 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:32.007514 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:32.007525 | orchestrator |
2026-02-20 02:46:32.007536 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 02:46:32.007558 | orchestrator | Friday 20 February 2026 02:46:31 +0000 (0:00:00.343) 0:00:35.540 *******
2026-02-20 02:46:51.428056 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-20 02:46:51.428258 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-20 02:46:51.428292 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-20 02:46:51.428315 | orchestrator |
2026-02-20 02:46:51.428336 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 02:46:51.428359 | orchestrator | Friday 20 February 2026 02:46:32 +0000 (0:00:00.966) 0:00:36.507 *******
2026-02-20 02:46:51.428378 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 02:46:51.428391 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 02:46:51.428402 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 02:46:51.428414 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 02:46:51.428425 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 02:46:51.428436 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 02:46:51.428447 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 02:46:51.428458 | orchestrator |
2026-02-20 02:46:51.428469 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 02:46:51.428480 | orchestrator | Friday 20 February 2026 02:46:33 +0000 (0:00:00.768) 0:00:37.275 *******
2026-02-20 02:46:51.428490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 02:46:51.428501 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 02:46:51.428512 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 02:46:51.428549 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 02:46:51.428563 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 02:46:51.428576 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 02:46:51.428589 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 02:46:51.428601 | orchestrator |
2026-02-20 02:46:51.428613 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 02:46:51.428625 | orchestrator | Friday 20 February 2026 02:46:35 +0000 (0:00:01.875) 0:00:39.151 *******
2026-02-20 02:46:51.428639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:46:51.428653 | orchestrator |
2026-02-20 02:46:51.428666 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 02:46:51.428678 | orchestrator | Friday 20 February 2026 02:46:36 +0000 (0:00:01.335) 0:00:40.487 *******
2026-02-20 02:46:51.428691 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:46:51.428705 | orchestrator |
2026-02-20 02:46:51.428725 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 02:46:51.428743 | orchestrator | Friday 20 February 2026 02:46:38 +0000 (0:00:01.287) 0:00:41.774 *******
2026-02-20 02:46:51.428759 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.428776 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.428794 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.428811 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:51.428823 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:51.428833 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:51.428844 | orchestrator |
2026-02-20 02:46:51.428855 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 02:46:51.428866 | orchestrator | Friday 20 February 2026 02:46:39 +0000 (0:00:01.175) 0:00:42.949 *******
2026-02-20 02:46:51.428876 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.428887 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.428898 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.428908 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.428934 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.428945 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.428956 | orchestrator |
2026-02-20 02:46:51.428967 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 02:46:51.428978 | orchestrator | Friday 20 February 2026 02:46:40 +0000 (0:00:00.728) 0:00:43.677 *******
2026-02-20 02:46:51.428988 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.428999 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.429009 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.429020 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.429030 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.429041 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.429052 | orchestrator |
2026-02-20 02:46:51.429062 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 02:46:51.429073 | orchestrator | Friday 20 February 2026 02:46:40 +0000 (0:00:00.865) 0:00:44.542 *******
2026-02-20 02:46:51.429083 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.429112 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.429134 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.429145 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.429156 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.429166 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.429206 | orchestrator |
2026-02-20 02:46:51.429217 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 02:46:51.429228 | orchestrator | Friday 20 February 2026 02:46:41 +0000 (0:00:00.711) 0:00:45.254 *******
2026-02-20 02:46:51.429250 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.429261 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.429292 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.429304 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:51.429315 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:51.429326 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:51.429336 | orchestrator |
2026-02-20 02:46:51.429347 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-02-20 02:46:51.429358 | orchestrator | Friday 20 February 2026 02:46:42 +0000 (0:00:01.277) 0:00:46.532 *******
2026-02-20 02:46:51.429369 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.429380 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.429391 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.429402 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.429413 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.429424 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.429435 | orchestrator |
2026-02-20 02:46:51.429446 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 02:46:51.429457 | orchestrator | Friday 20 February 2026 02:46:43 +0000 (0:00:00.605) 0:00:47.137 *******
2026-02-20 02:46:51.429467 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.429478 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.429489 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.429499 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.429510 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.429521 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.429532 | orchestrator |
2026-02-20 02:46:51.429543 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 02:46:51.429554 | orchestrator | Friday 20 February 2026 02:46:44 +0000 (0:00:00.782) 0:00:47.920 *******
2026-02-20 02:46:51.429565 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.429576 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.429586 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.429597 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:51.429608 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:51.429619 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:51.429629 | orchestrator |
2026-02-20 02:46:51.429640 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 02:46:51.429651 | orchestrator | Friday 20 February 2026 02:46:45 +0000 (0:00:01.052) 0:00:48.972 *******
2026-02-20 02:46:51.429662 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.429673 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.429683 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.429694 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:51.429704 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:51.429715 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:51.429726 | orchestrator |
2026-02-20 02:46:51.429737 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 02:46:51.429748 | orchestrator | Friday 20 February 2026 02:46:46 +0000 (0:00:01.342) 0:00:50.315 *******
2026-02-20 02:46:51.429759 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.429770 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.429780 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.429791 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.429802 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.429813 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.429824 | orchestrator |
2026-02-20 02:46:51.429835 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 02:46:51.429846 | orchestrator | Friday 20 February 2026 02:46:47 +0000 (0:00:00.594) 0:00:50.909 *******
2026-02-20 02:46:51.429857 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.429867 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.429878 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.429897 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:46:51.429908 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:46:51.429918 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:46:51.429929 | orchestrator |
2026-02-20 02:46:51.429940 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 02:46:51.429951 | orchestrator | Friday 20 February 2026 02:46:48 +0000 (0:00:00.903) 0:00:51.813 *******
2026-02-20 02:46:51.429962 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.429972 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.429983 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.429994 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.430005 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.430079 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.430094 | orchestrator |
2026-02-20 02:46:51.430105 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 02:46:51.430116 | orchestrator | Friday 20 February 2026 02:46:48 +0000 (0:00:00.603) 0:00:52.417 *******
2026-02-20 02:46:51.430127 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.430138 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.430148 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.430165 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.430224 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.430236 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.430247 | orchestrator |
2026-02-20 02:46:51.430258 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 02:46:51.430269 | orchestrator | Friday 20 February 2026 02:46:49 +0000 (0:00:00.879) 0:00:53.296 *******
2026-02-20 02:46:51.430280 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:46:51.430290 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:46:51.430301 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:46:51.430312 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.430323 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.430334 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.430344 | orchestrator |
2026-02-20 02:46:51.430355 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 02:46:51.430366 | orchestrator | Friday 20 February 2026 02:46:50 +0000 (0:00:00.591) 0:00:53.888 *******
2026-02-20 02:46:51.430377 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.430387 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:46:51.430398 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:46:51.430409 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:46:51.430419 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:46:51.430430 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:46:51.430441 | orchestrator |
2026-02-20 02:46:51.430452 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 02:46:51.430463 | orchestrator | Friday 20 February 2026 02:46:51 +0000 (0:00:00.799) 0:00:54.687 *******
2026-02-20 02:46:51.430474 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:46:51.430493 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:08.597429 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.597560 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:08.597576 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:08.597588 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:08.597598 | orchestrator |
2026-02-20 02:48:08.597609 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 02:48:08.597620 | orchestrator | Friday 20 February 2026 02:46:51 +0000 (0:00:00.595) 0:00:55.283 *******
2026-02-20 02:48:08.597630 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:08.597640 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:08.597650 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.597659 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:08.597670 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:08.597680 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:08.597690 | orchestrator |
2026-02-20 02:48:08.597700 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 02:48:08.597734 | orchestrator | Friday 20 February 2026 02:46:52 +0000 (0:00:00.832) 0:00:56.115 *******
2026-02-20 02:48:08.597745 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:08.597755 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:08.597765 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:08.597774 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:08.597784 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:08.597799 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:08.597815 | orchestrator |
2026-02-20 02:48:08.597831 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 02:48:08.597847 | orchestrator | Friday 20 February 2026 02:46:53 +0000 (0:00:00.619) 0:00:56.735 *******
2026-02-20 02:48:08.597863 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:08.597878 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:08.597895 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:08.597913 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:08.597930 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:08.597946 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:08.597963 | orchestrator |
2026-02-20 02:48:08.597980 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 02:48:08.597996 | orchestrator | Friday 20 February 2026 02:46:54 +0000 (0:00:01.251) 0:00:57.986 *******
2026-02-20 02:48:08.598012 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:48:08.598106 | 
orchestrator | changed: [testbed-node-3]
2026-02-20 02:48:08.598124 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:48:08.598139 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:48:08.598156 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:48:08.598172 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:48:08.598188 | orchestrator |
2026-02-20 02:48:08.598217 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 02:48:08.598233 | orchestrator | Friday 20 February 2026 02:46:56 +0000 (0:00:01.767) 0:00:59.754 *******
2026-02-20 02:48:08.598248 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:48:08.598263 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:48:08.598279 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:48:08.598325 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:48:08.598342 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:48:08.598359 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:48:08.598377 | orchestrator |
2026-02-20 02:48:08.598395 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 02:48:08.598412 | orchestrator | Friday 20 February 2026 02:46:58 +0000 (0:00:02.211) 0:01:01.966 *******
2026-02-20 02:48:08.598431 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:48:08.598450 | orchestrator |
2026-02-20 02:48:08.598466 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 02:48:08.598484 | orchestrator | Friday 20 February 2026 02:46:59 +0000 (0:00:01.440) 0:01:03.406 *******
2026-02-20 02:48:08.598501 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:08.598518 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:08.598536 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.598554 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:08.598572 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:08.598589 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:08.598606 | orchestrator |
2026-02-20 02:48:08.598625 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 02:48:08.598643 | orchestrator | Friday 20 February 2026 02:47:00 +0000 (0:00:00.836) 0:01:04.082 *******
2026-02-20 02:48:08.598661 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:08.598680 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:08.598717 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.598735 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:08.598770 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:08.598788 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:08.598804 | orchestrator |
2026-02-20 02:48:08.598821 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 02:48:08.598837 | orchestrator | Friday 20 February 2026 02:47:01 +0000 (0:00:00.836) 0:01:04.918 *******
2026-02-20 02:48:08.598855 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598873 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598890 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598907 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598924 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598942 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.598960 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.598978 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 02:48:08.598996 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.599041 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.599053 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.599062 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 02:48:08.599072 | orchestrator |
2026-02-20 02:48:08.599082 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 02:48:08.599091 | orchestrator | Friday 20 February 2026 02:47:02 +0000 (0:00:01.360) 0:01:06.279 *******
2026-02-20 02:48:08.599101 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:48:08.599110 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:48:08.599120 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:48:08.599129 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:48:08.599138 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:48:08.599148 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:48:08.599157 | orchestrator |
2026-02-20 02:48:08.599166 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 02:48:08.599176 | orchestrator | Friday 20 February 2026 02:47:03 +0000 (0:00:01.200) 0:01:07.479 *******
2026-02-20 02:48:08.599185 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:08.599195 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:08.599204 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.599214 | orchestrator | skipping: [testbed-node-0]
2026-02-20 
02:48:08.599223 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:08.599232 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:08.599242 | orchestrator | 2026-02-20 02:48:08.599251 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 02:48:08.599261 | orchestrator | Friday 20 February 2026 02:47:04 +0000 (0:00:00.648) 0:01:08.128 ******* 2026-02-20 02:48:08.599270 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:08.599279 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:08.599330 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:08.599348 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:08.599365 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:08.599381 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:08.599396 | orchestrator | 2026-02-20 02:48:08.599406 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 02:48:08.599416 | orchestrator | Friday 20 February 2026 02:47:05 +0000 (0:00:00.791) 0:01:08.919 ******* 2026-02-20 02:48:08.599427 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:08.599439 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:08.599462 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:08.599473 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:08.599484 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:08.599495 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:08.599507 | orchestrator | 2026-02-20 02:48:08.599517 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 02:48:08.599529 | orchestrator | Friday 20 February 2026 02:47:06 +0000 (0:00:00.635) 0:01:09.554 ******* 2026-02-20 02:48:08.599540 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:48:08.599552 | orchestrator | 2026-02-20 02:48:08.599564 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 02:48:08.599574 | orchestrator | Friday 20 February 2026 02:47:07 +0000 (0:00:01.288) 0:01:10.843 ******* 2026-02-20 02:48:08.599587 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:48:08.599599 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:48:08.599610 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:48:08.599619 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:48:08.599629 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:48:08.599638 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:48:08.599648 | orchestrator | 2026-02-20 02:48:08.599657 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 02:48:08.599667 | orchestrator | Friday 20 February 2026 02:48:07 +0000 (0:01:00.543) 0:02:11.386 ******* 2026-02-20 02:48:08.599677 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 02:48:08.599687 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 02:48:08.599697 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 02:48:08.599706 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:08.599724 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 02:48:08.599734 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 02:48:08.599743 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 02:48:08.599753 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:08.599762 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-20 02:48:08.599772 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 02:48:08.599782 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 02:48:08.599791 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:08.599801 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 02:48:08.599810 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 02:48:08.599820 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 02:48:08.599829 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:08.599839 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 02:48:08.599848 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 02:48:08.599858 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 02:48:08.599876 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.121766 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 02:48:31.121909 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 02:48:31.121933 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 02:48:31.121951 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.121969 | orchestrator |
2026-02-20 02:48:31.121986 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 02:48:31.122097 | orchestrator | Friday 20 February 2026 02:48:08 +0000 (0:00:00.748) 0:02:12.135 *******
2026-02-20 02:48:31.122136 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122147 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122157 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122167 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122176 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122185 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122195 | orchestrator |
2026-02-20 02:48:31.122205 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 02:48:31.122215 | orchestrator | Friday 20 February 2026 02:48:09 +0000 (0:00:00.764) 0:02:12.899 *******
2026-02-20 02:48:31.122224 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122233 | orchestrator |
2026-02-20 02:48:31.122243 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 02:48:31.122253 | orchestrator | Friday 20 February 2026 02:48:09 +0000 (0:00:00.143) 0:02:13.043 *******
2026-02-20 02:48:31.122262 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122272 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122283 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122294 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122305 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122315 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122351 | orchestrator |
2026-02-20 02:48:31.122363 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 02:48:31.122374 | orchestrator | Friday 20 February 2026 02:48:10 +0000 (0:00:00.576) 0:02:13.619 *******
2026-02-20 02:48:31.122385 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122395 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122406 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122417 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122427 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122438 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122448 | orchestrator |
2026-02-20 02:48:31.122459 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 02:48:31.122470 | orchestrator | Friday 20 February 2026 02:48:10 +0000 (0:00:00.782) 0:02:14.401 *******
2026-02-20 02:48:31.122482 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122493 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122504 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122514 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122526 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122537 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122547 | orchestrator |
2026-02-20 02:48:31.122558 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 02:48:31.122569 | orchestrator | Friday 20 February 2026 02:48:11 +0000 (0:00:00.565) 0:02:14.966 *******
2026-02-20 02:48:31.122580 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:31.122592 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:31.122603 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:31.122613 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:31.122622 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:31.122632 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:31.122641 | orchestrator |
2026-02-20 02:48:31.122651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 02:48:31.122661 | orchestrator | Friday 20 February 2026 02:48:14 +0000 (0:00:03.328) 0:02:18.295 *******
2026-02-20 02:48:31.122670 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:31.122680 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:31.122689 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:31.122699 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:31.122709 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:31.122718 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:31.122736 | orchestrator |
2026-02-20 02:48:31.122746 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 02:48:31.122756 | orchestrator | Friday 20 February 2026 02:48:15 +0000 (0:00:00.550) 0:02:18.845 *******
2026-02-20 02:48:31.122781 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:48:31.122793 | orchestrator |
2026-02-20 02:48:31.122803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 02:48:31.122812 | orchestrator | Friday 20 February 2026 02:48:16 +0000 (0:00:01.176) 0:02:20.022 *******
2026-02-20 02:48:31.122822 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122831 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122841 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122850 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122860 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122869 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122879 | orchestrator |
2026-02-20 02:48:31.122888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 02:48:31.122898 | orchestrator | Friday 20 February 2026 02:48:17 +0000 (0:00:00.779) 0:02:20.802 *******
2026-02-20 02:48:31.122908 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.122917 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.122927 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.122936 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.122946 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.122955 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.122965 | orchestrator |
2026-02-20 02:48:31.122974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 02:48:31.122984 | orchestrator | Friday 20 February 2026 02:48:17 +0000 (0:00:00.573) 0:02:21.376 *******
2026-02-20 02:48:31.122994 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123023 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123034 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123043 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123053 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123062 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123072 | orchestrator |
2026-02-20 02:48:31.123082 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 02:48:31.123091 | orchestrator | Friday 20 February 2026 02:48:18 +0000 (0:00:00.794) 0:02:22.171 *******
2026-02-20 02:48:31.123101 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123110 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123120 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123129 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123139 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123148 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123157 | orchestrator |
2026-02-20 02:48:31.123167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 02:48:31.123177 | orchestrator | Friday 20 February 2026 02:48:19 +0000 (0:00:00.583) 0:02:22.755 *******
2026-02-20 02:48:31.123186 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123196 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123205 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123215 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123224 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123234 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123243 | orchestrator |
2026-02-20 02:48:31.123253 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 02:48:31.123262 | orchestrator | Friday 20 February 2026 02:48:19 +0000 (0:00:00.787) 0:02:23.542 *******
2026-02-20 02:48:31.123272 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123281 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123297 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123306 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123316 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123354 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123363 | orchestrator |
2026-02-20 02:48:31.123373 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 02:48:31.123383 | orchestrator | Friday 20 February 2026 02:48:20 +0000 (0:00:00.596) 0:02:24.139 *******
2026-02-20 02:48:31.123392 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123402 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123411 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123421 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123430 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123440 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123449 | orchestrator |
2026-02-20 02:48:31.123459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 02:48:31.123468 | orchestrator | Friday 20 February 2026 02:48:21 +0000 (0:00:00.779) 0:02:24.918 *******
2026-02-20 02:48:31.123478 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:31.123487 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:31.123497 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:31.123506 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:31.123516 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:31.123525 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:31.123535 | orchestrator |
2026-02-20 02:48:31.123544 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 02:48:31.123554 | orchestrator | Friday 20 February 2026 02:48:21 +0000 (0:00:00.585) 0:02:25.503 *******
2026-02-20 02:48:31.123564 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:31.123573 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:31.123583 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:31.123592 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:48:31.123602 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:48:31.123611 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:48:31.123621 | orchestrator |
2026-02-20 02:48:31.123631 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 02:48:31.123640 | orchestrator | Friday 20 February 2026 02:48:23 +0000 (0:00:01.236) 0:02:26.740 *******
2026-02-20 02:48:31.123651 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:48:31.123662 | orchestrator |
2026-02-20 02:48:31.123672 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 02:48:31.123687 | orchestrator | Friday 20 February 2026 02:48:24 +0000 (0:00:01.174) 0:02:27.914 *******
2026-02-20 02:48:31.123697 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-20 02:48:31.123707 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-20 02:48:31.123716 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123726 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-20 02:48:31.123735 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-20 02:48:31.123745 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123755 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-20 02:48:31.123764 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123774 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:31.123783 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-20 02:48:31.123793 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123802 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:31.123812 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:31.123837 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:31.123847 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-20 02:48:31.123857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:31.123873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:36.034891 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:36.034993 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:36.035008 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035019 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-20 02:48:36.035030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:36.035040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:36.035051 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035072 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035082 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-20 02:48:36.035093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035124 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035135 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035146 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-20 02:48:36.035157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035167 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035178 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035188 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035198 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-20 02:48:36.035219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035230 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035251 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035261 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035272 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-20 02:48:36.035282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035293 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035303 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035313 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035359 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035381 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-20 02:48:36.035418 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035431 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035444 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035484 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035510 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 02:48:36.035522 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035562 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035575 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035600 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035613 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 02:48:36.035625 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035637 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035649 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035673 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035686 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 02:48:36.035698 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035710 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035722 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035765 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035777 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 02:48:36.035787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035810 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-20 02:48:36.035821 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-20 02:48:36.035831 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035842 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 02:48:36.035853 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-20 02:48:36.035863 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035874 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-20 02:48:36.035885 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-20 02:48:36.035895 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-20 02:48:36.035906 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-20 02:48:36.035917 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-20 02:48:36.035927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 02:48:36.035938 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-20 02:48:36.035949 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-20 02:48:36.035959 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-20 02:48:36.035970 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-20 02:48:36.035981 | orchestrator |
2026-02-20 02:48:36.036001 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 02:48:36.036013 | orchestrator | Friday 20 February 2026 02:48:31 +0000 (0:00:06.730) 0:02:34.645 *******
2026-02-20 02:48:36.036024 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:36.036035 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:36.036045 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:36.036057 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:48:36.036069 | orchestrator |
2026-02-20 02:48:36.036080 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-20 02:48:36.036091 | orchestrator | Friday 20 February 2026 02:48:32 +0000 (0:00:00.965) 0:02:35.610 *******
2026-02-20 02:48:36.036102 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036114 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036125 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036136 | orchestrator |
2026-02-20 02:48:36.036147 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-20 02:48:36.036157 | orchestrator | Friday 20 February 2026 02:48:32 +0000 (0:00:00.682) 0:02:36.293 *******
2026-02-20 02:48:36.036168 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036179 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036190 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 02:48:36.036201 | orchestrator |
2026-02-20 02:48:36.036217 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 02:48:36.036228 | orchestrator | Friday 20 February 2026 02:48:33 +0000 (0:00:01.197) 0:02:37.490 *******
2026-02-20 02:48:36.036239 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:36.036250 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:36.036261 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:36.036271 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:36.036282 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:36.036293 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:36.036303 | orchestrator |
2026-02-20 02:48:36.036314 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 02:48:36.036373 | orchestrator | Friday 20 February 2026 02:48:34 +0000 (0:00:00.748) 0:02:38.239 *******
2026-02-20 02:48:36.036386 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:48:36.036397 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:48:36.036408 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:48:36.036418 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:36.036429 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:36.036439 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:36.036450 | orchestrator |
2026-02-20 02:48:36.036461 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 02:48:36.036472 | orchestrator | Friday 20 February 2026 02:48:35 +0000 (0:00:00.567) 0:02:38.806 *******
2026-02-20 02:48:36.036482 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:36.036493 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:36.036504 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:36.036515 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:36.036526 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:36.036536 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:36.036547 | orchestrator |
2026-02-20 02:48:36.036564 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 02:48:48.367314 | orchestrator | Friday 20 February 2026 02:48:36 +0000 (0:00:00.763) 0:02:39.570 *******
2026-02-20 02:48:48.367500 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:48.367524 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:48.367543 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:48.367561 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:48.367579 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:48.367595 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:48.367613 | orchestrator |
2026-02-20 02:48:48.367631 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 02:48:48.367649 | orchestrator | Friday 20 February 2026 02:48:36 +0000 (0:00:00.580) 0:02:40.151 *******
2026-02-20 02:48:48.367669 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:48.367688 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:48.367706 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:48.367725 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:48:48.367744 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:48:48.367762 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:48:48.367781 | orchestrator |
2026-02-20 02:48:48.367800 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 02:48:48.367821 | orchestrator | Friday 20 February 2026 02:48:37 +0000 (0:00:00.801) 0:02:40.952 *******
2026-02-20 02:48:48.367841 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:48:48.367862 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:48:48.367883 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:48:48.367904 | orchestrator | skipping:
[testbed-node-0] 2026-02-20 02:48:48.367926 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.367947 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.367968 | orchestrator | 2026-02-20 02:48:48.367987 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 02:48:48.368007 | orchestrator | Friday 20 February 2026 02:48:38 +0000 (0:00:00.634) 0:02:41.587 ******* 2026-02-20 02:48:48.368029 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.368048 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.368069 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.368090 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.368110 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368129 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368149 | orchestrator | 2026-02-20 02:48:48.368171 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 02:48:48.368193 | orchestrator | Friday 20 February 2026 02:48:38 +0000 (0:00:00.787) 0:02:42.375 ******* 2026-02-20 02:48:48.368215 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.368235 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.368254 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.368273 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.368292 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368309 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368327 | orchestrator | 2026-02-20 02:48:48.368395 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 02:48:48.368416 | orchestrator | Friday 20 February 2026 02:48:39 +0000 (0:00:00.563) 0:02:42.938 ******* 2026-02-20 02:48:48.368432 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 02:48:48.368448 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368463 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368479 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:48:48.368496 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:48:48.368511 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:48:48.368527 | orchestrator | 2026-02-20 02:48:48.368542 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 02:48:48.368557 | orchestrator | Friday 20 February 2026 02:48:42 +0000 (0:00:02.888) 0:02:45.827 ******* 2026-02-20 02:48:48.368604 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:48:48.368619 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:48:48.368634 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:48:48.368649 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.368664 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368679 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368693 | orchestrator | 2026-02-20 02:48:48.368708 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 02:48:48.368723 | orchestrator | Friday 20 February 2026 02:48:42 +0000 (0:00:00.573) 0:02:46.401 ******* 2026-02-20 02:48:48.368738 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:48:48.368770 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:48:48.368786 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:48:48.368801 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.368816 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368830 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368844 | orchestrator | 2026-02-20 02:48:48.368859 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 02:48:48.368874 | orchestrator | Friday 20 February 2026 02:48:43 +0000 
(0:00:00.828) 0:02:47.229 ******* 2026-02-20 02:48:48.368889 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.368904 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.368920 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.368936 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.368952 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.368967 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.368982 | orchestrator | 2026-02-20 02:48:48.369000 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 02:48:48.369018 | orchestrator | Friday 20 February 2026 02:48:44 +0000 (0:00:00.575) 0:02:47.805 ******* 2026-02-20 02:48:48.369035 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 02:48:48.369054 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 02:48:48.369071 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 02:48:48.369087 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369129 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369144 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369160 | orchestrator | 2026-02-20 02:48:48.369176 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 02:48:48.369193 | orchestrator | Friday 20 February 2026 02:48:45 +0000 (0:00:00.777) 0:02:48.582 ******* 2026-02-20 02:48:48.369213 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-20 02:48:48.369233 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-20 02:48:48.369249 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.369260 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-20 02:48:48.369270 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-20 02:48:48.369292 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.369302 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-20 02:48:48.369312 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-02-20 02:48:48.369322 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.369331 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369368 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369387 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369403 | orchestrator | 2026-02-20 02:48:48.369420 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 02:48:48.369436 | orchestrator | Friday 20 February 2026 02:48:45 +0000 (0:00:00.615) 0:02:49.197 ******* 2026-02-20 02:48:48.369448 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.369457 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.369467 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.369476 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369485 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369495 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369504 | orchestrator | 2026-02-20 02:48:48.369514 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 02:48:48.369523 | orchestrator | Friday 20 February 2026 02:48:46 +0000 (0:00:00.763) 0:02:49.961 ******* 2026-02-20 02:48:48.369541 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.369551 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.369560 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.369569 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369579 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369588 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369598 | orchestrator | 2026-02-20 02:48:48.369607 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 02:48:48.369617 | orchestrator | Friday 20 February 
2026 02:48:46 +0000 (0:00:00.556) 0:02:50.517 ******* 2026-02-20 02:48:48.369626 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.369636 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.369645 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.369654 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369664 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369673 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369683 | orchestrator | 2026-02-20 02:48:48.369693 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 02:48:48.369702 | orchestrator | Friday 20 February 2026 02:48:47 +0000 (0:00:00.806) 0:02:51.324 ******* 2026-02-20 02:48:48.369711 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:48:48.369721 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:48:48.369730 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:48:48.369739 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:48:48.369749 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:48:48.369758 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:48:48.369768 | orchestrator | 2026-02-20 02:48:48.369777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 02:48:48.369804 | orchestrator | Friday 20 February 2026 02:48:48 +0000 (0:00:00.578) 0:02:51.903 ******* 2026-02-20 02:49:04.904327 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.904495 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:49:04.904508 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:49:04.904516 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.904522 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:49:04.904529 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:49:04.904536 | orchestrator | 2026-02-20 02:49:04.904544 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 02:49:04.904552 | orchestrator | Friday 20 February 2026 02:48:49 +0000 (0:00:00.816) 0:02:52.719 ******* 2026-02-20 02:49:04.904558 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:49:04.904567 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:49:04.904573 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.904580 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:49:04.904586 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:49:04.904592 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:49:04.904598 | orchestrator | 2026-02-20 02:49:04.904605 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 02:49:04.904611 | orchestrator | Friday 20 February 2026 02:48:49 +0000 (0:00:00.798) 0:02:53.517 ******* 2026-02-20 02:49:04.904618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:49:04.904625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:49:04.904632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:49:04.904638 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.904645 | orchestrator | 2026-02-20 02:49:04.904651 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 02:49:04.904657 | orchestrator | Friday 20 February 2026 02:48:50 +0000 (0:00:00.408) 0:02:53.926 ******* 2026-02-20 02:49:04.904664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:49:04.904670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:49:04.904677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:49:04.904684 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.904690 | orchestrator | 2026-02-20 02:49:04.904696 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 02:49:04.904702 | orchestrator | Friday 20 February 2026 02:48:50 +0000 (0:00:00.421) 0:02:54.348 ******* 2026-02-20 02:49:04.904709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:49:04.904715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:49:04.904721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:49:04.904727 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.904733 | orchestrator | 2026-02-20 02:49:04.904739 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 02:49:04.904745 | orchestrator | Friday 20 February 2026 02:48:51 +0000 (0:00:00.386) 0:02:54.735 ******* 2026-02-20 02:49:04.904751 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:49:04.904757 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:49:04.904762 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:49:04.904768 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.904775 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:49:04.904781 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:49:04.904787 | orchestrator | 2026-02-20 02:49:04.904794 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 02:49:04.904800 | orchestrator | Friday 20 February 2026 02:48:51 +0000 (0:00:00.622) 0:02:55.357 ******* 2026-02-20 02:49:04.904806 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 02:49:04.904813 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-20 02:49:04.904819 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-20 02:49:04.904825 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-20 02:49:04.904851 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.904857 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-02-20 02:49:04.904863 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:49:04.904869 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-20 02:49:04.904875 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:49:04.904881 | orchestrator | 2026-02-20 02:49:04.904888 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 02:49:04.904895 | orchestrator | Friday 20 February 2026 02:48:53 +0000 (0:00:01.668) 0:02:57.025 ******* 2026-02-20 02:49:04.904915 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:49:04.904922 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:49:04.904928 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:49:04.904934 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:49:04.904940 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:49:04.904946 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:49:04.904951 | orchestrator | 2026-02-20 02:49:04.904957 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-20 02:49:04.904963 | orchestrator | Friday 20 February 2026 02:48:55 +0000 (0:00:02.499) 0:02:59.524 ******* 2026-02-20 02:49:04.904969 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:49:04.904975 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:49:04.904981 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:49:04.904987 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:49:04.904993 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:49:04.904999 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:49:04.905005 | orchestrator | 2026-02-20 02:49:04.905011 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-20 02:49:04.905017 | orchestrator | Friday 20 February 2026 02:48:56 +0000 (0:00:00.944) 0:03:00.469 ******* 2026-02-20 02:49:04.905024 | orchestrator | skipping: 
[testbed-node-3] 2026-02-20 02:49:04.905030 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:49:04.905036 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:49:04.905044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:49:04.905050 | orchestrator | 2026-02-20 02:49:04.905056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-20 02:49:04.905063 | orchestrator | Friday 20 February 2026 02:48:57 +0000 (0:00:01.002) 0:03:01.471 ******* 2026-02-20 02:49:04.905068 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:49:04.905091 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:49:04.905097 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:49:04.905103 | orchestrator | 2026-02-20 02:49:04.905109 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-20 02:49:04.905115 | orchestrator | Friday 20 February 2026 02:48:58 +0000 (0:00:00.313) 0:03:01.785 ******* 2026-02-20 02:49:04.905120 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:49:04.905126 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:49:04.905132 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:49:04.905138 | orchestrator | 2026-02-20 02:49:04.905144 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-20 02:49:04.905150 | orchestrator | Friday 20 February 2026 02:48:59 +0000 (0:00:01.464) 0:03:03.250 ******* 2026-02-20 02:49:04.905156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 02:49:04.905162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 02:49:04.905168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 02:49:04.905173 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.905179 | orchestrator | 2026-02-20 
02:49:04.905185 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-20 02:49:04.905190 | orchestrator | Friday 20 February 2026 02:49:00 +0000 (0:00:00.636) 0:03:03.886 ******* 2026-02-20 02:49:04.905197 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:49:04.905203 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:49:04.905217 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:49:04.905223 | orchestrator | 2026-02-20 02:49:04.905229 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-20 02:49:04.905235 | orchestrator | Friday 20 February 2026 02:49:00 +0000 (0:00:00.319) 0:03:04.205 ******* 2026-02-20 02:49:04.905241 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:49:04.905248 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:49:04.905254 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:49:04.905260 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:49:04.905266 | orchestrator | 2026-02-20 02:49:04.905271 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-20 02:49:04.905277 | orchestrator | Friday 20 February 2026 02:49:01 +0000 (0:00:01.003) 0:03:05.208 ******* 2026-02-20 02:49:04.905283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:49:04.905289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:49:04.905296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:49:04.905301 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905307 | orchestrator | 2026-02-20 02:49:04.905313 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-20 02:49:04.905319 | orchestrator | Friday 20 February 2026 02:49:02 +0000 (0:00:00.387) 
0:03:05.596 ******* 2026-02-20 02:49:04.905325 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905330 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:49:04.905336 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:49:04.905341 | orchestrator | 2026-02-20 02:49:04.905347 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-20 02:49:04.905353 | orchestrator | Friday 20 February 2026 02:49:02 +0000 (0:00:00.315) 0:03:05.912 ******* 2026-02-20 02:49:04.905360 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905392 | orchestrator | 2026-02-20 02:49:04.905399 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-20 02:49:04.905406 | orchestrator | Friday 20 February 2026 02:49:02 +0000 (0:00:00.223) 0:03:06.135 ******* 2026-02-20 02:49:04.905412 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905419 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:49:04.905426 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:49:04.905432 | orchestrator | 2026-02-20 02:49:04.905439 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-20 02:49:04.905446 | orchestrator | Friday 20 February 2026 02:49:02 +0000 (0:00:00.322) 0:03:06.457 ******* 2026-02-20 02:49:04.905453 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905459 | orchestrator | 2026-02-20 02:49:04.905465 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-20 02:49:04.905477 | orchestrator | Friday 20 February 2026 02:49:03 +0000 (0:00:00.601) 0:03:07.059 ******* 2026-02-20 02:49:04.905484 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905489 | orchestrator | 2026-02-20 02:49:04.905495 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-20 02:49:04.905501 
| orchestrator | Friday 20 February 2026 02:49:03 +0000 (0:00:00.227) 0:03:07.286 ******* 2026-02-20 02:49:04.905507 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905513 | orchestrator | 2026-02-20 02:49:04.905520 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-20 02:49:04.905526 | orchestrator | Friday 20 February 2026 02:49:03 +0000 (0:00:00.136) 0:03:07.423 ******* 2026-02-20 02:49:04.905533 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905539 | orchestrator | 2026-02-20 02:49:04.905545 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-20 02:49:04.905551 | orchestrator | Friday 20 February 2026 02:49:04 +0000 (0:00:00.222) 0:03:07.645 ******* 2026-02-20 02:49:04.905557 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905571 | orchestrator | 2026-02-20 02:49:04.905577 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-20 02:49:04.905584 | orchestrator | Friday 20 February 2026 02:49:04 +0000 (0:00:00.214) 0:03:07.860 ******* 2026-02-20 02:49:04.905590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:49:04.905597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:49:04.905605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:49:04.905612 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:04.905618 | orchestrator | 2026-02-20 02:49:04.905624 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-20 02:49:04.905630 | orchestrator | Friday 20 February 2026 02:49:04 +0000 (0:00:00.405) 0:03:08.266 ******* 2026-02-20 02:49:04.905645 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:49:22.371187 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:49:22.371294 | orchestrator | 
skipping: [testbed-node-5]
2026-02-20 02:49:22.371307 | orchestrator | 
2026-02-20 02:49:22.371317 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-20 02:49:22.371328 | orchestrator | Friday 20 February 2026 02:49:05 +0000 (0:00:00.282) 0:03:08.549 *******
2026-02-20 02:49:22.371337 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.371346 | orchestrator | 
2026-02-20 02:49:22.371356 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-20 02:49:22.371366 | orchestrator | Friday 20 February 2026 02:49:05 +0000 (0:00:00.220) 0:03:08.769 *******
2026-02-20 02:49:22.371375 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.371385 | orchestrator | 
2026-02-20 02:49:22.371442 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-20 02:49:22.371452 | orchestrator | Friday 20 February 2026 02:49:05 +0000 (0:00:00.219) 0:03:08.988 *******
2026-02-20 02:49:22.371461 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.371471 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.371480 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.371490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:49:22.371499 | orchestrator | 
2026-02-20 02:49:22.371509 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-20 02:49:22.371519 | orchestrator | Friday 20 February 2026 02:49:06 +0000 (0:00:01.017) 0:03:10.006 *******
2026-02-20 02:49:22.371529 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:49:22.371539 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:49:22.371548 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:49:22.371557 | orchestrator | 
2026-02-20 02:49:22.371566 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-20 02:49:22.371575 | orchestrator | Friday 20 February 2026 02:49:06 +0000 (0:00:00.314) 0:03:10.321 *******
2026-02-20 02:49:22.371584 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:49:22.371593 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:49:22.371603 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:49:22.371612 | orchestrator | 
2026-02-20 02:49:22.371621 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-20 02:49:22.371630 | orchestrator | Friday 20 February 2026 02:49:08 +0000 (0:00:01.467) 0:03:11.788 *******
2026-02-20 02:49:22.371639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-20 02:49:22.371649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-20 02:49:22.371658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-20 02:49:22.371668 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.371677 | orchestrator | 
2026-02-20 02:49:22.371686 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-20 02:49:22.371695 | orchestrator | Friday 20 February 2026 02:49:08 +0000 (0:00:00.623) 0:03:12.412 *******
2026-02-20 02:49:22.371704 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:49:22.371713 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:49:22.371746 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:49:22.371759 | orchestrator | 
2026-02-20 02:49:22.371768 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-20 02:49:22.371776 | orchestrator | Friday 20 February 2026 02:49:09 +0000 (0:00:00.328) 0:03:12.740 *******
2026-02-20 02:49:22.371785 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.371794 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.371802 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.371811 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:49:22.371820 | orchestrator | 
2026-02-20 02:49:22.371828 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-20 02:49:22.371837 | orchestrator | Friday 20 February 2026 02:49:10 +0000 (0:00:00.983) 0:03:13.724 *******
2026-02-20 02:49:22.371846 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:49:22.371856 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:49:22.371865 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:49:22.371875 | orchestrator | 
2026-02-20 02:49:22.371884 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-20 02:49:22.371902 | orchestrator | Friday 20 February 2026 02:49:10 +0000 (0:00:00.346) 0:03:14.071 *******
2026-02-20 02:49:22.371909 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:49:22.371915 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:49:22.371921 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:49:22.371928 | orchestrator | 
2026-02-20 02:49:22.371934 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-20 02:49:22.371940 | orchestrator | Friday 20 February 2026 02:49:11 +0000 (0:00:01.206) 0:03:15.277 *******
2026-02-20 02:49:22.371946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-20 02:49:22.371953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-20 02:49:22.371959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-20 02:49:22.371965 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.371971 | orchestrator | 
2026-02-20 02:49:22.371977 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-20 02:49:22.371984 | orchestrator | Friday 20 February 2026 02:49:12 +0000 (0:00:00.777) 0:03:16.055 *******
2026-02-20 02:49:22.371989 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:49:22.371995 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:49:22.372000 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:49:22.372006 | orchestrator | 
2026-02-20 02:49:22.372011 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-20 02:49:22.372016 | orchestrator | Friday 20 February 2026 02:49:12 +0000 (0:00:00.486) 0:03:16.542 *******
2026-02-20 02:49:22.372022 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.372027 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:49:22.372032 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:49:22.372038 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372043 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372048 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.372054 | orchestrator | 
2026-02-20 02:49:22.372072 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-20 02:49:22.372078 | orchestrator | Friday 20 February 2026 02:49:13 +0000 (0:00:00.569) 0:03:17.111 *******
2026-02-20 02:49:22.372084 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:49:22.372089 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:49:22.372094 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:49:22.372100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:49:22.372105 | orchestrator | 
2026-02-20 02:49:22.372111 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-20 02:49:22.372116 | orchestrator | Friday 20 February 2026 02:49:14 +0000 (0:00:01.005) 0:03:18.116 *******
2026-02-20 02:49:22.372128 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:22.372134 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:22.372139 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:22.372145 | orchestrator | 
2026-02-20 02:49:22.372150 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-20 02:49:22.372156 | orchestrator | Friday 20 February 2026 02:49:14 +0000 (0:00:00.322) 0:03:18.439 *******
2026-02-20 02:49:22.372161 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:22.372166 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:22.372172 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:22.372177 | orchestrator | 
2026-02-20 02:49:22.372183 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-20 02:49:22.372188 | orchestrator | Friday 20 February 2026 02:49:16 +0000 (0:00:01.216) 0:03:19.655 *******
2026-02-20 02:49:22.372194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-02-20 02:49:22.372199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-02-20 02:49:22.372205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-02-20 02:49:22.372210 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372215 | orchestrator | 
2026-02-20 02:49:22.372221 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-20 02:49:22.372226 | orchestrator | Friday 20 February 2026 02:49:16 +0000 (0:00:00.861) 0:03:20.516 *******
2026-02-20 02:49:22.372232 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:22.372237 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:22.372243 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:22.372248 | orchestrator | 
2026-02-20 02:49:22.372253 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-20 02:49:22.372259 | orchestrator | 
2026-02-20 02:49:22.372264 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 02:49:22.372270 | orchestrator | Friday 20 February 2026 02:49:17 +0000 (0:00:00.787) 0:03:21.304 *******
2026-02-20 02:49:22.372276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:49:22.372283 | orchestrator | 
2026-02-20 02:49:22.372288 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 02:49:22.372294 | orchestrator | Friday 20 February 2026 02:49:18 +0000 (0:00:00.706) 0:03:22.011 *******
2026-02-20 02:49:22.372299 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:49:22.372305 | orchestrator | 
2026-02-20 02:49:22.372310 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 02:49:22.372315 | orchestrator | Friday 20 February 2026 02:49:18 +0000 (0:00:00.523) 0:03:22.535 *******
2026-02-20 02:49:22.372321 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:22.372326 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:22.372332 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:22.372337 | orchestrator | 
2026-02-20 02:49:22.372342 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 02:49:22.372348 | orchestrator | Friday 20 February 2026 02:49:19 +0000 (0:00:00.710) 0:03:23.245 *******
2026-02-20 02:49:22.372353 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372359 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372364 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.372369 | orchestrator | 
2026-02-20 02:49:22.372375 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 02:49:22.372383 | orchestrator | Friday 20 February 2026 02:49:20 +0000 (0:00:00.507) 0:03:23.752 *******
2026-02-20 02:49:22.372409 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372418 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372423 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.372428 | orchestrator | 
2026-02-20 02:49:22.372434 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 02:49:22.372444 | orchestrator | Friday 20 February 2026 02:49:20 +0000 (0:00:00.316) 0:03:24.069 *******
2026-02-20 02:49:22.372449 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372455 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372460 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.372465 | orchestrator | 
2026-02-20 02:49:22.372470 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 02:49:22.372476 | orchestrator | Friday 20 February 2026 02:49:20 +0000 (0:00:00.305) 0:03:24.374 *******
2026-02-20 02:49:22.372481 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:22.372487 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:22.372492 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:22.372497 | orchestrator | 
2026-02-20 02:49:22.372503 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 02:49:22.372508 | orchestrator | Friday 20 February 2026 02:49:21 +0000 (0:00:00.719) 0:03:25.093 *******
2026-02-20 02:49:22.372513 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372519 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372524 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:22.372529 | orchestrator | 
2026-02-20 02:49:22.372535 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 02:49:22.372540 | orchestrator | Friday 20 February 2026 02:49:22 +0000 (0:00:00.325) 0:03:25.583 *******
2026-02-20 02:49:22.372545 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:22.372551 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:22.372560 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417016 | orchestrator | 
2026-02-20 02:49:43.417123 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 02:49:43.417137 | orchestrator | Friday 20 February 2026 02:49:22 +0000 (0:00:00.325) 0:03:25.909 *******
2026-02-20 02:49:43.417146 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417157 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417165 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417174 | orchestrator | 
2026-02-20 02:49:43.417184 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 02:49:43.417193 | orchestrator | Friday 20 February 2026 02:49:23 +0000 (0:00:00.723) 0:03:26.633 *******
2026-02-20 02:49:43.417201 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417210 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417219 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417227 | orchestrator | 
2026-02-20 02:49:43.417236 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 02:49:43.417245 | orchestrator | Friday 20 February 2026 02:49:23 +0000 (0:00:00.713) 0:03:27.346 *******
2026-02-20 02:49:43.417254 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417280 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417289 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417298 | orchestrator | 
2026-02-20 02:49:43.417307 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 02:49:43.417316 | orchestrator | Friday 20 February 2026 02:49:24 +0000 (0:00:00.495) 0:03:27.842 *******
2026-02-20 02:49:43.417325 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417334 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417342 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417351 | orchestrator | 
2026-02-20 02:49:43.417360 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 02:49:43.417369 | orchestrator | Friday 20 February 2026 02:49:24 +0000 (0:00:00.362) 0:03:28.205 *******
2026-02-20 02:49:43.417377 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417386 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417395 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417403 | orchestrator | 
2026-02-20 02:49:43.417412 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 02:49:43.417463 | orchestrator | Friday 20 February 2026 02:49:24 +0000 (0:00:00.301) 0:03:28.506 *******
2026-02-20 02:49:43.417494 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417504 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417512 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417521 | orchestrator | 
2026-02-20 02:49:43.417529 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 02:49:43.417538 | orchestrator | Friday 20 February 2026 02:49:25 +0000 (0:00:00.303) 0:03:28.810 *******
2026-02-20 02:49:43.417547 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417555 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417565 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417575 | orchestrator | 
2026-02-20 02:49:43.417585 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 02:49:43.417596 | orchestrator | Friday 20 February 2026 02:49:25 +0000 (0:00:00.493) 0:03:29.303 *******
2026-02-20 02:49:43.417606 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417615 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417625 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417635 | orchestrator | 
2026-02-20 02:49:43.417644 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 02:49:43.417654 | orchestrator | Friday 20 February 2026 02:49:26 +0000 (0:00:00.301) 0:03:29.604 *******
2026-02-20 02:49:43.417664 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.417674 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:49:43.417684 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:49:43.417694 | orchestrator | 
2026-02-20 02:49:43.417704 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 02:49:43.417714 | orchestrator | Friday 20 February 2026 02:49:26 +0000 (0:00:00.294) 0:03:29.899 *******
2026-02-20 02:49:43.417724 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417734 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417743 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417754 | orchestrator | 
2026-02-20 02:49:43.417764 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 02:49:43.417788 | orchestrator | Friday 20 February 2026 02:49:26 +0000 (0:00:00.321) 0:03:30.220 *******
2026-02-20 02:49:43.417799 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417809 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417818 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417828 | orchestrator | 
2026-02-20 02:49:43.417838 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 02:49:43.417848 | orchestrator | Friday 20 February 2026 02:49:27 +0000 (0:00:00.550) 0:03:30.771 *******
2026-02-20 02:49:43.417858 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417868 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417877 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417887 | orchestrator | 
2026-02-20 02:49:43.417897 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-20 02:49:43.417907 | orchestrator | Friday 20 February 2026 02:49:27 +0000 (0:00:00.538) 0:03:31.310 *******
2026-02-20 02:49:43.417917 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.417926 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.417934 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.417942 | orchestrator | 
2026-02-20 02:49:43.417951 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-20 02:49:43.417960 | orchestrator | Friday 20 February 2026 02:49:28 +0000 (0:00:00.321) 0:03:31.631 *******
2026-02-20 02:49:43.417969 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:49:43.417978 | orchestrator | 
2026-02-20 02:49:43.417987 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-20 02:49:43.417995 | orchestrator | Friday 20 February 2026 02:49:28 +0000 (0:00:00.766) 0:03:32.398 *******
2026-02-20 02:49:43.418004 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:49:43.418067 | orchestrator | 
2026-02-20 02:49:43.418077 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-20 02:49:43.418118 | orchestrator | Friday 20 February 2026 02:49:29 +0000 (0:00:00.161) 0:03:32.559 *******
2026-02-20 02:49:43.418134 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-20 02:49:43.418148 | orchestrator | 
2026-02-20 02:49:43.418162 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-20 02:49:43.418176 | orchestrator | Friday 20 February 2026 02:49:30 +0000 (0:00:01.031) 0:03:33.591 *******
2026-02-20 02:49:43.418190 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418204 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.418217 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.418226 | orchestrator | 
2026-02-20 02:49:43.418235 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-20 02:49:43.418243 | orchestrator | Friday 20 February 2026 02:49:30 +0000 (0:00:00.323) 0:03:33.915 *******
2026-02-20 02:49:43.418252 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418260 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.418269 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.418277 | orchestrator | 
2026-02-20 02:49:43.418286 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-20 02:49:43.418295 | orchestrator | Friday 20 February 2026 02:49:30 +0000 (0:00:00.550) 0:03:34.465 *******
2026-02-20 02:49:43.418303 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418312 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:43.418320 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:43.418329 | orchestrator | 
2026-02-20 02:49:43.418338 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-20 02:49:43.418346 | orchestrator | Friday 20 February 2026 02:49:32 +0000 (0:00:01.299) 0:03:35.765 *******
2026-02-20 02:49:43.418355 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418363 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:43.418372 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:43.418380 | orchestrator | 
2026-02-20 02:49:43.418389 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-20 02:49:43.418397 | orchestrator | Friday 20 February 2026 02:49:32 +0000 (0:00:00.782) 0:03:36.548 *******
2026-02-20 02:49:43.418406 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418414 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:43.418465 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:43.418474 | orchestrator | 
2026-02-20 02:49:43.418482 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-20 02:49:43.418491 | orchestrator | Friday 20 February 2026 02:49:33 +0000 (0:00:00.665) 0:03:37.213 *******
2026-02-20 02:49:43.418499 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418508 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.418517 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.418525 | orchestrator | 
2026-02-20 02:49:43.418534 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-20 02:49:43.418542 | orchestrator | Friday 20 February 2026 02:49:34 +0000 (0:00:00.933) 0:03:38.147 *******
2026-02-20 02:49:43.418551 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418560 | orchestrator | 
2026-02-20 02:49:43.418568 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-20 02:49:43.418577 | orchestrator | Friday 20 February 2026 02:49:35 +0000 (0:00:01.299) 0:03:39.447 *******
2026-02-20 02:49:43.418585 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418594 | orchestrator | 
2026-02-20 02:49:43.418603 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-20 02:49:43.418611 | orchestrator | Friday 20 February 2026 02:49:36 +0000 (0:00:00.703) 0:03:40.151 *******
2026-02-20 02:49:43.418628 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-20 02:49:43.418638 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 02:49:43.418647 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 02:49:43.418664 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-20 02:49:43.418673 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-20 02:49:43.418681 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-20 02:49:43.418690 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-20 02:49:43.418699 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-20 02:49:43.418713 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-20 02:49:43.418722 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-20 02:49:43.418731 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-20 02:49:43.418739 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-20 02:49:43.418748 | orchestrator | 
2026-02-20 02:49:43.418757 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-20 02:49:43.418765 | orchestrator | Friday 20 February 2026 02:49:39 +0000 (0:00:03.066) 0:03:43.217 *******
2026-02-20 02:49:43.418774 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418782 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:43.418791 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:43.418800 | orchestrator | 
2026-02-20 02:49:43.418809 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-20 02:49:43.418818 | orchestrator | Friday 20 February 2026 02:49:40 +0000 (0:00:01.225) 0:03:44.443 *******
2026-02-20 02:49:43.418826 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418835 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.418844 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.418852 | orchestrator | 
2026-02-20 02:49:43.418861 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-20 02:49:43.418870 | orchestrator | Friday 20 February 2026 02:49:41 +0000 (0:00:00.609) 0:03:45.052 *******
2026-02-20 02:49:43.418878 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:49:43.418887 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:49:43.418896 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:49:43.418904 | orchestrator | 
2026-02-20 02:49:43.418913 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-20 02:49:43.418922 | orchestrator | Friday 20 February 2026 02:49:41 +0000 (0:00:00.363) 0:03:45.415 *******
2026-02-20 02:49:43.418930 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:49:43.418939 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:49:43.418948 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:49:43.418956 | orchestrator | 
2026-02-20 02:49:43.418972 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-20 02:50:44.381796 | orchestrator | Friday 20 February 2026 02:49:43 +0000 (0:00:01.536) 0:03:46.952 *******
2026-02-20 02:50:44.381931 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:50:44.381960 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:50:44.381972 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:50:44.381984 | orchestrator | 
2026-02-20 02:50:44.381996 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-20 02:50:44.382008 | orchestrator | Friday 20 February 2026 02:49:44 +0000 (0:00:01.298) 0:03:48.251 *******
2026-02-20 02:50:44.382097 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.382119 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.382137 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.382154 | orchestrator | 
2026-02-20 02:50:44.382173 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-20 02:50:44.382195 | orchestrator | Friday 20 February 2026 02:49:45 +0000 (0:00:00.515) 0:03:48.767 *******
2026-02-20 02:50:44.382215 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:50:44.382234 | orchestrator | 
2026-02-20 02:50:44.382254 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-20 02:50:44.382274 | orchestrator | Friday 20 February 2026 02:49:45 +0000 (0:00:00.515) 0:03:49.282 *******
2026-02-20 02:50:44.382330 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.382353 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.382374 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.382395 | orchestrator | 
2026-02-20 02:50:44.382415 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-20 02:50:44.382433 | orchestrator | Friday 20 February 2026 02:49:46 +0000 (0:00:00.315) 0:03:49.598 *******
2026-02-20 02:50:44.382446 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.382459 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.382472 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.382484 | orchestrator | 
2026-02-20 02:50:44.382556 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-20 02:50:44.382572 | orchestrator | Friday 20 February 2026 02:49:46 +0000 (0:00:00.514) 0:03:50.112 *******
2026-02-20 02:50:44.382589 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:50:44.382608 | orchestrator | 
2026-02-20 02:50:44.382625 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-20 02:50:44.382643 | orchestrator | Friday 20 February 2026 02:49:47 +0000 (0:00:00.541) 0:03:50.654 *******
2026-02-20 02:50:44.382659 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:50:44.382677 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:50:44.382695 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:50:44.382714 | orchestrator | 
2026-02-20 02:50:44.382732 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-20 02:50:44.382751 | orchestrator | Friday 20 February 2026 02:49:48 +0000 (0:00:01.865) 0:03:52.519 *******
2026-02-20 02:50:44.382771 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:50:44.382789 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:50:44.382806 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:50:44.382817 | orchestrator | 
2026-02-20 02:50:44.382828 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-20 02:50:44.382839 | orchestrator | Friday 20 February 2026 02:49:50 +0000 (0:00:01.427) 0:03:53.947 *******
2026-02-20 02:50:44.382849 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:50:44.382860 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:50:44.382871 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:50:44.382881 | orchestrator | 
2026-02-20 02:50:44.382892 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-20 02:50:44.382903 | orchestrator | Friday 20 February 2026 02:49:52 +0000 (0:00:01.766) 0:03:55.714 *******
2026-02-20 02:50:44.382913 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:50:44.382924 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:50:44.382935 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:50:44.382945 | orchestrator | 
2026-02-20 02:50:44.382956 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-20 02:50:44.382984 | orchestrator | Friday 20 February 2026 02:49:54 +0000 (0:00:02.038) 0:03:57.753 *******
2026-02-20 02:50:44.382995 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:50:44.383006 | orchestrator | 
2026-02-20 02:50:44.383017 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-20 02:50:44.383028 | orchestrator | Friday 20 February 2026 02:49:54 +0000 (0:00:00.733) 0:03:58.486 *******
2026-02-20 02:50:44.383038 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-20 02:50:44.383049 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:50:44.383061 | orchestrator | 
2026-02-20 02:50:44.383073 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-20 02:50:44.383083 | orchestrator | Friday 20 February 2026 02:50:16 +0000 (0:00:21.821) 0:04:20.308 *******
2026-02-20 02:50:44.383094 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:50:44.383105 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:50:44.383127 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:50:44.383138 | orchestrator | 
2026-02-20 02:50:44.383149 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-20 02:50:44.383160 | orchestrator | Friday 20 February 2026 02:50:26 +0000 (0:00:09.342) 0:04:29.650 *******
2026-02-20 02:50:44.383171 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.383181 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.383192 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.383205 | orchestrator | 
2026-02-20 02:50:44.383223 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-20 02:50:44.383240 | orchestrator | Friday 20 February 2026 02:50:26 +0000 (0:00:00.304) 0:04:29.955 *******
2026-02-20 02:50:44.383290 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-20 02:50:44.383315 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-20 02:50:44.383334 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-20 02:50:44.383356 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-20 02:50:44.383375 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-20 02:50:44.383394 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__049c15c983eca3451d9b4b3186d777994ac032fa'}]) 
2026-02-20 02:50:44.383413 | orchestrator | 
2026-02-20 02:50:44.383431 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-20 02:50:44.383450 | orchestrator | Friday 20 February 2026 02:50:41 +0000 (0:00:14.669) 0:04:44.624 *******
2026-02-20 02:50:44.383469 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.383488 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.383540 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.383559 | orchestrator | 
2026-02-20 02:50:44.383578 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-20 02:50:44.383593 | orchestrator | Friday 20 February 2026 02:50:41 +0000 (0:00:00.336) 0:04:44.961 *******
2026-02-20 02:50:44.383613 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:50:44.383635 | orchestrator | 
2026-02-20 02:50:44.383646 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-20 02:50:44.383657 | orchestrator | Friday 20 February 2026 02:50:42 +0000 (0:00:00.703) 0:04:45.664 *******
2026-02-20 02:50:44.383668 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:50:44.383679 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:50:44.383690 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:50:44.383701 | orchestrator | 
2026-02-20 02:50:44.383712 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-20 02:50:44.383722 | orchestrator | Friday 20 February 2026 02:50:42 +0000 (0:00:00.323) 0:04:45.987 *******
2026-02-20 02:50:44.383733 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.383744 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:50:44.383755 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:50:44.383765 | orchestrator | 
2026-02-20 02:50:44.383776 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-20 02:50:44.383787 | orchestrator | Friday 20 February 2026 02:50:42 +0000 (0:00:00.315) 0:04:46.303 *******
2026-02-20 02:50:44.383798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-02-20 02:50:44.383810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-02-20 02:50:44.383821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-02-20 02:50:44.383831 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:50:44.383842 | orchestrator | 
2026-02-20 02:50:44.383853 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-20 02:50:44.383864 | orchestrator | Friday 20 February 2026 02:50:43 +0000 (0:00:00.836) 0:04:47.139 *******
2026-02-20 02:50:44.383874 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:50:44.383885 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:50:44.383896 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:50:44.383907 | orchestrator | 
2026-02-20 02:50:44.383918 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-20 02:50:44.383929 | orchestrator | 
2026-02-20 02:50:44.383951 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml]
************************ 2026-02-20 02:51:10.239313 | orchestrator | Friday 20 February 2026 02:50:44 +0000 (0:00:00.770) 0:04:47.909 ******* 2026-02-20 02:51:10.239441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:51:10.239462 | orchestrator | 2026-02-20 02:51:10.239477 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 02:51:10.239491 | orchestrator | Friday 20 February 2026 02:50:44 +0000 (0:00:00.511) 0:04:48.421 ******* 2026-02-20 02:51:10.239504 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:51:10.239517 | orchestrator | 2026-02-20 02:51:10.239611 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 02:51:10.239629 | orchestrator | Friday 20 February 2026 02:50:45 +0000 (0:00:00.714) 0:04:49.135 ******* 2026-02-20 02:51:10.239643 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.239658 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.239671 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.239685 | orchestrator | 2026-02-20 02:51:10.239699 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 02:51:10.239713 | orchestrator | Friday 20 February 2026 02:50:46 +0000 (0:00:00.737) 0:04:49.872 ******* 2026-02-20 02:51:10.239726 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.239741 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.239756 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.239768 | orchestrator | 2026-02-20 02:51:10.239782 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 02:51:10.239792 | orchestrator | Friday 20 February 2026 02:50:46 +0000 
(0:00:00.327) 0:04:50.199 ******* 2026-02-20 02:51:10.239800 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.239835 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.239844 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.239853 | orchestrator | 2026-02-20 02:51:10.239862 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 02:51:10.239871 | orchestrator | Friday 20 February 2026 02:50:47 +0000 (0:00:00.515) 0:04:50.714 ******* 2026-02-20 02:51:10.239880 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.239889 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.239898 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.239907 | orchestrator | 2026-02-20 02:51:10.239916 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 02:51:10.239925 | orchestrator | Friday 20 February 2026 02:50:47 +0000 (0:00:00.310) 0:04:51.025 ******* 2026-02-20 02:51:10.239934 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.239943 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.239951 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.239960 | orchestrator | 2026-02-20 02:51:10.239969 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 02:51:10.239978 | orchestrator | Friday 20 February 2026 02:50:48 +0000 (0:00:00.713) 0:04:51.738 ******* 2026-02-20 02:51:10.239987 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.239996 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240005 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240014 | orchestrator | 2026-02-20 02:51:10.240023 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 02:51:10.240032 | orchestrator | Friday 20 February 2026 02:50:48 +0000 (0:00:00.295) 
0:04:52.033 ******* 2026-02-20 02:51:10.240041 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240050 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240059 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240067 | orchestrator | 2026-02-20 02:51:10.240076 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 02:51:10.240085 | orchestrator | Friday 20 February 2026 02:50:48 +0000 (0:00:00.508) 0:04:52.542 ******* 2026-02-20 02:51:10.240094 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.240103 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240111 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240120 | orchestrator | 2026-02-20 02:51:10.240141 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 02:51:10.240151 | orchestrator | Friday 20 February 2026 02:50:49 +0000 (0:00:00.729) 0:04:53.272 ******* 2026-02-20 02:51:10.240161 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.240170 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240178 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240187 | orchestrator | 2026-02-20 02:51:10.240195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 02:51:10.240203 | orchestrator | Friday 20 February 2026 02:50:50 +0000 (0:00:00.713) 0:04:53.986 ******* 2026-02-20 02:51:10.240211 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240219 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240226 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240234 | orchestrator | 2026-02-20 02:51:10.240242 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 02:51:10.240249 | orchestrator | Friday 20 February 2026 02:50:50 +0000 (0:00:00.298) 0:04:54.284 ******* 2026-02-20 
02:51:10.240257 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.240265 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240272 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240280 | orchestrator | 2026-02-20 02:51:10.240288 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 02:51:10.240296 | orchestrator | Friday 20 February 2026 02:50:51 +0000 (0:00:00.542) 0:04:54.827 ******* 2026-02-20 02:51:10.240303 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240311 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240319 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240334 | orchestrator | 2026-02-20 02:51:10.240341 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 02:51:10.240349 | orchestrator | Friday 20 February 2026 02:50:51 +0000 (0:00:00.307) 0:04:55.134 ******* 2026-02-20 02:51:10.240357 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240365 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240373 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240380 | orchestrator | 2026-02-20 02:51:10.240405 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 02:51:10.240414 | orchestrator | Friday 20 February 2026 02:50:51 +0000 (0:00:00.286) 0:04:55.420 ******* 2026-02-20 02:51:10.240427 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240439 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240456 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240474 | orchestrator | 2026-02-20 02:51:10.240485 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 02:51:10.240498 | orchestrator | Friday 20 February 2026 02:50:52 +0000 (0:00:00.280) 0:04:55.700 ******* 2026-02-20 02:51:10.240510 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240522 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240558 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240570 | orchestrator | 2026-02-20 02:51:10.240583 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 02:51:10.240596 | orchestrator | Friday 20 February 2026 02:50:52 +0000 (0:00:00.518) 0:04:56.219 ******* 2026-02-20 02:51:10.240610 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.240623 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.240635 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.240649 | orchestrator | 2026-02-20 02:51:10.240662 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 02:51:10.240675 | orchestrator | Friday 20 February 2026 02:50:53 +0000 (0:00:00.332) 0:04:56.552 ******* 2026-02-20 02:51:10.240687 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.240700 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240714 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240728 | orchestrator | 2026-02-20 02:51:10.240737 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 02:51:10.240745 | orchestrator | Friday 20 February 2026 02:50:53 +0000 (0:00:00.331) 0:04:56.883 ******* 2026-02-20 02:51:10.240755 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.240769 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240782 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240794 | orchestrator | 2026-02-20 02:51:10.240811 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 02:51:10.240827 | orchestrator | Friday 20 February 2026 02:50:53 +0000 (0:00:00.332) 0:04:57.216 ******* 2026-02-20 02:51:10.240839 | orchestrator | ok: [testbed-node-0] 
2026-02-20 02:51:10.240858 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.240872 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.240884 | orchestrator | 2026-02-20 02:51:10.240896 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 02:51:10.240908 | orchestrator | Friday 20 February 2026 02:50:54 +0000 (0:00:00.765) 0:04:57.981 ******* 2026-02-20 02:51:10.240921 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 02:51:10.240934 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:51:10.240948 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:51:10.240961 | orchestrator | 2026-02-20 02:51:10.240973 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 02:51:10.240986 | orchestrator | Friday 20 February 2026 02:50:55 +0000 (0:00:00.627) 0:04:58.609 ******* 2026-02-20 02:51:10.240999 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:51:10.241033 | orchestrator | 2026-02-20 02:51:10.241049 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-20 02:51:10.241062 | orchestrator | Friday 20 February 2026 02:50:55 +0000 (0:00:00.505) 0:04:59.115 ******* 2026-02-20 02:51:10.241081 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:51:10.241097 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:51:10.241119 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:51:10.241132 | orchestrator | 2026-02-20 02:51:10.241145 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-20 02:51:10.241161 | orchestrator | Friday 20 February 2026 02:50:56 +0000 (0:00:00.951) 0:05:00.067 ******* 2026-02-20 02:51:10.241179 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 02:51:10.241191 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:51:10.241214 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:51:10.241229 | orchestrator | 2026-02-20 02:51:10.241242 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-20 02:51:10.241255 | orchestrator | Friday 20 February 2026 02:50:56 +0000 (0:00:00.310) 0:05:00.377 ******* 2026-02-20 02:51:10.241269 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-20 02:51:10.241283 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-20 02:51:10.241296 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-20 02:51:10.241309 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-20 02:51:10.241322 | orchestrator | 2026-02-20 02:51:10.241336 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-20 02:51:10.241349 | orchestrator | Friday 20 February 2026 02:51:07 +0000 (0:00:10.505) 0:05:10.883 ******* 2026-02-20 02:51:10.241362 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:51:10.241376 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:51:10.241390 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:51:10.241402 | orchestrator | 2026-02-20 02:51:10.241415 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-20 02:51:10.241428 | orchestrator | Friday 20 February 2026 02:51:07 +0000 (0:00:00.341) 0:05:11.224 ******* 2026-02-20 02:51:10.241447 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-20 02:51:10.241466 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-20 02:51:10.241483 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-20 02:51:10.241496 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 02:51:10.241510 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:51:10.241523 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:51:10.241556 | orchestrator | 2026-02-20 02:51:10.241571 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-20 02:51:10.241595 | orchestrator | Friday 20 February 2026 02:51:10 +0000 (0:00:02.541) 0:05:13.766 ******* 2026-02-20 02:52:10.252803 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-20 02:52:10.252922 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-20 02:52:10.252949 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-20 02:52:10.252989 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-20 02:52:10.253002 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-20 02:52:10.253024 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-20 02:52:10.253036 | orchestrator | 2026-02-20 02:52:10.253049 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-20 02:52:10.253061 | orchestrator | Friday 20 February 2026 02:51:11 +0000 (0:00:01.290) 0:05:15.056 ******* 2026-02-20 02:52:10.253072 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:52:10.253083 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:52:10.253094 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:52:10.253105 | orchestrator | 2026-02-20 02:52:10.253116 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-20 02:52:10.253127 | orchestrator | Friday 20 February 2026 02:51:12 +0000 (0:00:00.705) 0:05:15.761 ******* 2026-02-20 02:52:10.253162 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.253182 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:52:10.253199 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:52:10.253217 | 
orchestrator | 2026-02-20 02:52:10.253236 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 02:52:10.253255 | orchestrator | Friday 20 February 2026 02:51:12 +0000 (0:00:00.299) 0:05:16.061 ******* 2026-02-20 02:52:10.253275 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.253294 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:52:10.253310 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:52:10.253321 | orchestrator | 2026-02-20 02:52:10.253333 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-20 02:52:10.253346 | orchestrator | Friday 20 February 2026 02:51:13 +0000 (0:00:00.520) 0:05:16.581 ******* 2026-02-20 02:52:10.253359 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:52:10.253371 | orchestrator | 2026-02-20 02:52:10.253384 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-20 02:52:10.253396 | orchestrator | Friday 20 February 2026 02:51:13 +0000 (0:00:00.502) 0:05:17.084 ******* 2026-02-20 02:52:10.253408 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.253421 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:52:10.253433 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:52:10.253444 | orchestrator | 2026-02-20 02:52:10.253455 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-20 02:52:10.253465 | orchestrator | Friday 20 February 2026 02:51:13 +0000 (0:00:00.311) 0:05:17.396 ******* 2026-02-20 02:52:10.253476 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.253487 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:52:10.253497 | orchestrator | skipping: [testbed-node-2] 2026-02-20 02:52:10.253508 | orchestrator | 2026-02-20 02:52:10.253518 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-20 02:52:10.253529 | orchestrator | Friday 20 February 2026 02:51:14 +0000 (0:00:00.566) 0:05:17.962 ******* 2026-02-20 02:52:10.253544 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:52:10.253563 | orchestrator | 2026-02-20 02:52:10.253582 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-20 02:52:10.253600 | orchestrator | Friday 20 February 2026 02:51:14 +0000 (0:00:00.523) 0:05:18.485 ******* 2026-02-20 02:52:10.253683 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.253704 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.253723 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.253735 | orchestrator | 2026-02-20 02:52:10.253746 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-20 02:52:10.253763 | orchestrator | Friday 20 February 2026 02:51:16 +0000 (0:00:01.224) 0:05:19.710 ******* 2026-02-20 02:52:10.253790 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.253827 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.253846 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.253862 | orchestrator | 2026-02-20 02:52:10.253879 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-20 02:52:10.253899 | orchestrator | Friday 20 February 2026 02:51:17 +0000 (0:00:01.380) 0:05:21.090 ******* 2026-02-20 02:52:10.253918 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.253935 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.253953 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.253970 | orchestrator | 2026-02-20 02:52:10.253988 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-20 02:52:10.254008 | orchestrator | Friday 20 February 2026 02:51:19 +0000 (0:00:01.793) 0:05:22.883 ******* 2026-02-20 02:52:10.254102 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.254115 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.254138 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.254149 | orchestrator | 2026-02-20 02:52:10.254160 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 02:52:10.254171 | orchestrator | Friday 20 February 2026 02:51:21 +0000 (0:00:01.997) 0:05:24.881 ******* 2026-02-20 02:52:10.254181 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.254192 | orchestrator | skipping: [testbed-node-1] 2026-02-20 02:52:10.254203 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-20 02:52:10.254213 | orchestrator | 2026-02-20 02:52:10.254224 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-20 02:52:10.254235 | orchestrator | Friday 20 February 2026 02:51:21 +0000 (0:00:00.608) 0:05:25.489 ******* 2026-02-20 02:52:10.254245 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-20 02:52:10.254256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-20 02:52:10.254287 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-20 02:52:10.254299 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-20 02:52:10.254310 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-02-20 02:52:10.254321 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:52:10.254332 | orchestrator | 2026-02-20 02:52:10.254343 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-20 02:52:10.254353 | orchestrator | Friday 20 February 2026 02:51:52 +0000 (0:00:30.152) 0:05:55.642 ******* 2026-02-20 02:52:10.254364 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:52:10.254375 | orchestrator | 2026-02-20 02:52:10.254386 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-20 02:52:10.254396 | orchestrator | Friday 20 February 2026 02:51:53 +0000 (0:00:01.313) 0:05:56.956 ******* 2026-02-20 02:52:10.254407 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:52:10.254418 | orchestrator | 2026-02-20 02:52:10.254429 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-20 02:52:10.254439 | orchestrator | Friday 20 February 2026 02:51:53 +0000 (0:00:00.325) 0:05:57.282 ******* 2026-02-20 02:52:10.254450 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:52:10.254461 | orchestrator | 2026-02-20 02:52:10.254471 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-20 02:52:10.254482 | orchestrator | Friday 20 February 2026 02:51:53 +0000 (0:00:00.164) 0:05:57.446 ******* 2026-02-20 02:52:10.254493 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-20 02:52:10.254504 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-20 02:52:10.254514 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-20 02:52:10.254525 | orchestrator | 2026-02-20 02:52:10.254535 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-20 02:52:10.254546 | orchestrator | Friday 20 February 2026 02:52:00 +0000 (0:00:06.497) 0:06:03.944 ******* 2026-02-20 02:52:10.254557 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-20 02:52:10.254567 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-20 02:52:10.254578 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-20 02:52:10.254588 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-20 02:52:10.254599 | orchestrator | 2026-02-20 02:52:10.254628 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-20 02:52:10.254639 | orchestrator | Friday 20 February 2026 02:52:05 +0000 (0:00:05.051) 0:06:08.995 ******* 2026-02-20 02:52:10.254656 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.254667 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.254678 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.254689 | orchestrator | 2026-02-20 02:52:10.254700 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-20 02:52:10.254710 | orchestrator | Friday 20 February 2026 02:52:06 +0000 (0:00:00.684) 0:06:09.680 ******* 2026-02-20 02:52:10.254721 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 02:52:10.254732 | orchestrator | 2026-02-20 02:52:10.254743 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-20 02:52:10.254754 | orchestrator | Friday 20 February 2026 02:52:06 +0000 (0:00:00.545) 0:06:10.226 ******* 2026-02-20 02:52:10.254764 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:52:10.254775 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:52:10.254786 | orchestrator | ok: 
[testbed-node-2] 2026-02-20 02:52:10.254797 | orchestrator | 2026-02-20 02:52:10.254814 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-20 02:52:10.254825 | orchestrator | Friday 20 February 2026 02:52:07 +0000 (0:00:00.499) 0:06:10.725 ******* 2026-02-20 02:52:10.254836 | orchestrator | changed: [testbed-node-0] 2026-02-20 02:52:10.254847 | orchestrator | changed: [testbed-node-1] 2026-02-20 02:52:10.254858 | orchestrator | changed: [testbed-node-2] 2026-02-20 02:52:10.254868 | orchestrator | 2026-02-20 02:52:10.254879 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-20 02:52:10.254890 | orchestrator | Friday 20 February 2026 02:52:08 +0000 (0:00:01.149) 0:06:11.875 ******* 2026-02-20 02:52:10.254901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 02:52:10.254912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 02:52:10.254923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 02:52:10.254934 | orchestrator | skipping: [testbed-node-0] 2026-02-20 02:52:10.254945 | orchestrator | 2026-02-20 02:52:10.254956 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-20 02:52:10.254977 | orchestrator | Friday 20 February 2026 02:52:08 +0000 (0:00:00.647) 0:06:12.523 ******* 2026-02-20 02:52:10.254998 | orchestrator | ok: [testbed-node-0] 2026-02-20 02:52:10.255009 | orchestrator | ok: [testbed-node-1] 2026-02-20 02:52:10.255020 | orchestrator | ok: [testbed-node-2] 2026-02-20 02:52:10.255031 | orchestrator | 2026-02-20 02:52:10.255043 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-20 02:52:10.255054 | orchestrator | 2026-02-20 02:52:10.255065 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 
02:52:10.255076 | orchestrator | Friday 20 February 2026 02:52:09 +0000 (0:00:00.523) 0:06:13.047 ******* 2026-02-20 02:52:10.255087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:52:10.255099 | orchestrator | 2026-02-20 02:52:10.255110 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 02:52:10.255127 | orchestrator | Friday 20 February 2026 02:52:10 +0000 (0:00:00.741) 0:06:13.788 ******* 2026-02-20 02:52:25.642244 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:52:25.642324 | orchestrator | 2026-02-20 02:52:25.642331 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 02:52:25.642336 | orchestrator | Friday 20 February 2026 02:52:10 +0000 (0:00:00.711) 0:06:14.500 ******* 2026-02-20 02:52:25.642341 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642345 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642349 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642353 | orchestrator | 2026-02-20 02:52:25.642358 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 02:52:25.642376 | orchestrator | Friday 20 February 2026 02:52:11 +0000 (0:00:00.312) 0:06:14.812 ******* 2026-02-20 02:52:25.642380 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642385 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642389 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642393 | orchestrator | 2026-02-20 02:52:25.642397 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 02:52:25.642401 | orchestrator | Friday 20 February 2026 02:52:11 +0000 (0:00:00.685) 0:06:15.498 ******* 
2026-02-20 02:52:25.642405 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642409 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642413 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642417 | orchestrator | 2026-02-20 02:52:25.642421 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 02:52:25.642425 | orchestrator | Friday 20 February 2026 02:52:12 +0000 (0:00:00.693) 0:06:16.191 ******* 2026-02-20 02:52:25.642428 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642432 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642436 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642440 | orchestrator | 2026-02-20 02:52:25.642444 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 02:52:25.642448 | orchestrator | Friday 20 February 2026 02:52:13 +0000 (0:00:00.855) 0:06:17.047 ******* 2026-02-20 02:52:25.642452 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642458 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642464 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642470 | orchestrator | 2026-02-20 02:52:25.642476 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 02:52:25.642487 | orchestrator | Friday 20 February 2026 02:52:13 +0000 (0:00:00.310) 0:06:17.357 ******* 2026-02-20 02:52:25.642494 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642500 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642506 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642513 | orchestrator | 2026-02-20 02:52:25.642519 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 02:52:25.642524 | orchestrator | Friday 20 February 2026 02:52:14 +0000 (0:00:00.304) 0:06:17.662 ******* 2026-02-20 02:52:25.642530 | 
orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642537 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642544 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642550 | orchestrator | 2026-02-20 02:52:25.642556 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 02:52:25.642562 | orchestrator | Friday 20 February 2026 02:52:14 +0000 (0:00:00.317) 0:06:17.980 ******* 2026-02-20 02:52:25.642568 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642575 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642581 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642587 | orchestrator | 2026-02-20 02:52:25.642593 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 02:52:25.642599 | orchestrator | Friday 20 February 2026 02:52:15 +0000 (0:00:00.878) 0:06:18.858 ******* 2026-02-20 02:52:25.642605 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642611 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642618 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642622 | orchestrator | 2026-02-20 02:52:25.642626 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 02:52:25.642700 | orchestrator | Friday 20 February 2026 02:52:16 +0000 (0:00:00.706) 0:06:19.565 ******* 2026-02-20 02:52:25.642705 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642709 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642713 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642717 | orchestrator | 2026-02-20 02:52:25.642720 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 02:52:25.642724 | orchestrator | Friday 20 February 2026 02:52:16 +0000 (0:00:00.306) 0:06:19.872 ******* 2026-02-20 02:52:25.642734 | orchestrator | skipping: 
[testbed-node-3] 2026-02-20 02:52:25.642738 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642742 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642746 | orchestrator | 2026-02-20 02:52:25.642750 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 02:52:25.642754 | orchestrator | Friday 20 February 2026 02:52:16 +0000 (0:00:00.298) 0:06:20.171 ******* 2026-02-20 02:52:25.642758 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642761 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642765 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642769 | orchestrator | 2026-02-20 02:52:25.642774 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 02:52:25.642778 | orchestrator | Friday 20 February 2026 02:52:17 +0000 (0:00:00.550) 0:06:20.721 ******* 2026-02-20 02:52:25.642785 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642792 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642798 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642805 | orchestrator | 2026-02-20 02:52:25.642812 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 02:52:25.642818 | orchestrator | Friday 20 February 2026 02:52:17 +0000 (0:00:00.333) 0:06:21.054 ******* 2026-02-20 02:52:25.642825 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642831 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642838 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642844 | orchestrator | 2026-02-20 02:52:25.642851 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 02:52:25.642858 | orchestrator | Friday 20 February 2026 02:52:17 +0000 (0:00:00.324) 0:06:21.379 ******* 2026-02-20 02:52:25.642865 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642887 | 
orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642895 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642903 | orchestrator | 2026-02-20 02:52:25.642911 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 02:52:25.642915 | orchestrator | Friday 20 February 2026 02:52:18 +0000 (0:00:00.290) 0:06:21.669 ******* 2026-02-20 02:52:25.642920 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642924 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642929 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642934 | orchestrator | 2026-02-20 02:52:25.642938 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 02:52:25.642941 | orchestrator | Friday 20 February 2026 02:52:18 +0000 (0:00:00.516) 0:06:22.186 ******* 2026-02-20 02:52:25.642945 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.642949 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.642953 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.642957 | orchestrator | 2026-02-20 02:52:25.642961 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 02:52:25.642965 | orchestrator | Friday 20 February 2026 02:52:18 +0000 (0:00:00.291) 0:06:22.477 ******* 2026-02-20 02:52:25.642969 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642972 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.642976 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.642980 | orchestrator | 2026-02-20 02:52:25.642984 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 02:52:25.642988 | orchestrator | Friday 20 February 2026 02:52:19 +0000 (0:00:00.333) 0:06:22.811 ******* 2026-02-20 02:52:25.642992 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.642996 | orchestrator | ok: 
[testbed-node-4] 2026-02-20 02:52:25.642999 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.643003 | orchestrator | 2026-02-20 02:52:25.643007 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-20 02:52:25.643011 | orchestrator | Friday 20 February 2026 02:52:19 +0000 (0:00:00.719) 0:06:23.531 ******* 2026-02-20 02:52:25.643015 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.643019 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.643023 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.643031 | orchestrator | 2026-02-20 02:52:25.643035 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-20 02:52:25.643039 | orchestrator | Friday 20 February 2026 02:52:20 +0000 (0:00:00.345) 0:06:23.877 ******* 2026-02-20 02:52:25.643043 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:52:25.643048 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:52:25.643052 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:52:25.643056 | orchestrator | 2026-02-20 02:52:25.643060 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-20 02:52:25.643064 | orchestrator | Friday 20 February 2026 02:52:20 +0000 (0:00:00.616) 0:06:24.493 ******* 2026-02-20 02:52:25.643068 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:52:25.643072 | orchestrator | 2026-02-20 02:52:25.643076 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-20 02:52:25.643080 | orchestrator | Friday 20 February 2026 02:52:21 +0000 (0:00:00.503) 0:06:24.996 ******* 2026-02-20 02:52:25.643084 | orchestrator | skipping: 
[testbed-node-3] 2026-02-20 02:52:25.643088 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.643092 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.643096 | orchestrator | 2026-02-20 02:52:25.643099 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-20 02:52:25.643103 | orchestrator | Friday 20 February 2026 02:52:21 +0000 (0:00:00.531) 0:06:25.528 ******* 2026-02-20 02:52:25.643107 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:52:25.643111 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:52:25.643115 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:52:25.643119 | orchestrator | 2026-02-20 02:52:25.643126 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-20 02:52:25.643130 | orchestrator | Friday 20 February 2026 02:52:22 +0000 (0:00:00.322) 0:06:25.850 ******* 2026-02-20 02:52:25.643134 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.643138 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.643142 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.643146 | orchestrator | 2026-02-20 02:52:25.643150 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-20 02:52:25.643154 | orchestrator | Friday 20 February 2026 02:52:22 +0000 (0:00:00.620) 0:06:26.471 ******* 2026-02-20 02:52:25.643158 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:52:25.643161 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:52:25.643165 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:52:25.643169 | orchestrator | 2026-02-20 02:52:25.643173 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-20 02:52:25.643177 | orchestrator | Friday 20 February 2026 02:52:23 +0000 (0:00:00.542) 0:06:27.013 ******* 2026-02-20 02:52:25.643181 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 02:52:25.643186 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 02:52:25.643190 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 02:52:25.643194 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 02:52:25.643198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 02:52:25.643202 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 02:52:25.643206 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 02:52:25.643212 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 02:53:30.454007 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 02:53:30.454193 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 02:53:30.454216 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 02:53:30.454236 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 02:53:30.454257 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 02:53:30.454274 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 02:53:30.454293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 02:53:30.454313 | orchestrator | 2026-02-20 02:53:30.454334 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-20 02:53:30.454355 | orchestrator | Friday 20 February 2026 02:52:25 +0000 (0:00:02.159) 0:06:29.173 ******* 2026-02-20 02:53:30.454375 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:53:30.454398 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:53:30.454418 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:53:30.454437 | orchestrator | 2026-02-20 02:53:30.454457 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-20 02:53:30.454477 | orchestrator | Friday 20 February 2026 02:52:25 +0000 (0:00:00.313) 0:06:29.486 ******* 2026-02-20 02:53:30.454498 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:53:30.454520 | orchestrator | 2026-02-20 02:53:30.454541 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-20 02:53:30.454560 | orchestrator | Friday 20 February 2026 02:52:26 +0000 (0:00:00.733) 0:06:30.220 ******* 2026-02-20 02:53:30.454581 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 02:53:30.454602 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 02:53:30.454622 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 02:53:30.454643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-20 02:53:30.454665 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-20 02:53:30.454685 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-20 02:53:30.454705 | orchestrator | 2026-02-20 02:53:30.454725 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-20 02:53:30.454772 | orchestrator | Friday 20 February 2026 02:52:27 +0000 (0:00:01.056) 0:06:31.276 ******* 2026-02-20 02:53:30.454794 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:53:30.454814 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 02:53:30.454834 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:53:30.454854 | orchestrator | 2026-02-20 02:53:30.454874 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-20 02:53:30.454893 | orchestrator | Friday 20 February 2026 02:52:30 +0000 (0:00:02.281) 0:06:33.558 ******* 2026-02-20 02:53:30.454913 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-20 02:53:30.454932 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 02:53:30.454953 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:53:30.454972 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-20 02:53:30.455093 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 02:53:30.455119 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:53:30.455140 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-20 02:53:30.455169 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 02:53:30.455181 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:53:30.455192 | orchestrator | 2026-02-20 02:53:30.455204 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-20 02:53:30.455239 | orchestrator | Friday 20 February 2026 02:52:31 +0000 (0:00:01.209) 0:06:34.767 ******* 2026-02-20 02:53:30.455250 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:53:30.455261 | orchestrator | 2026-02-20 02:53:30.455272 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-20 02:53:30.455282 | orchestrator | Friday 20 February 2026 02:52:33 +0000 (0:00:02.255) 0:06:37.022 ******* 2026-02-20 02:53:30.455293 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:53:30.455304 | orchestrator | 2026-02-20 02:53:30.455314 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-20 02:53:30.455325 | orchestrator | Friday 20 February 2026 02:52:34 +0000 (0:00:00.760) 0:06:37.783 ******* 2026-02-20 02:53:30.455337 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'}) 2026-02-20 02:53:30.455349 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}) 2026-02-20 02:53:30.455359 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}) 2026-02-20 02:53:30.455370 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}) 2026-02-20 02:53:30.455403 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}) 2026-02-20 02:53:30.455414 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}) 2026-02-20 02:53:30.455425 | orchestrator | 2026-02-20 02:53:30.455436 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-20 02:53:30.455446 | orchestrator | Friday 20 February 2026 02:53:13 +0000 (0:00:39.348) 0:07:17.132 ******* 2026-02-20 02:53:30.455457 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:53:30.455468 | orchestrator | skipping: [testbed-node-4] 2026-02-20 
02:53:30.455478 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:53:30.455489 | orchestrator | 2026-02-20 02:53:30.455499 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-20 02:53:30.455510 | orchestrator | Friday 20 February 2026 02:53:13 +0000 (0:00:00.301) 0:07:17.434 ******* 2026-02-20 02:53:30.455520 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:53:30.455531 | orchestrator | 2026-02-20 02:53:30.455542 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-20 02:53:30.455552 | orchestrator | Friday 20 February 2026 02:53:14 +0000 (0:00:00.745) 0:07:18.180 ******* 2026-02-20 02:53:30.455563 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:53:30.455574 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:53:30.455585 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:53:30.455596 | orchestrator | 2026-02-20 02:53:30.455607 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-20 02:53:30.455618 | orchestrator | Friday 20 February 2026 02:53:15 +0000 (0:00:00.653) 0:07:18.834 ******* 2026-02-20 02:53:30.455628 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:53:30.455639 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:53:30.455649 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:53:30.455660 | orchestrator | 2026-02-20 02:53:30.455685 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-20 02:53:30.455696 | orchestrator | Friday 20 February 2026 02:53:17 +0000 (0:00:02.647) 0:07:21.481 ******* 2026-02-20 02:53:30.455707 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:53:30.455726 | orchestrator | 2026-02-20 02:53:30.455736 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-20 02:53:30.455779 | orchestrator | Friday 20 February 2026 02:53:18 +0000 (0:00:00.712) 0:07:22.194 ******* 2026-02-20 02:53:30.455828 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:53:30.455857 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:53:30.455877 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:53:30.455896 | orchestrator | 2026-02-20 02:53:30.455914 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-20 02:53:30.455932 | orchestrator | Friday 20 February 2026 02:53:19 +0000 (0:00:01.185) 0:07:23.379 ******* 2026-02-20 02:53:30.455950 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:53:30.455967 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:53:30.455987 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:53:30.456006 | orchestrator | 2026-02-20 02:53:30.456025 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-20 02:53:30.456046 | orchestrator | Friday 20 February 2026 02:53:20 +0000 (0:00:01.154) 0:07:24.534 ******* 2026-02-20 02:53:30.456067 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:53:30.456086 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:53:30.456103 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:53:30.456114 | orchestrator | 2026-02-20 02:53:30.456124 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-20 02:53:30.456135 | orchestrator | Friday 20 February 2026 02:53:22 +0000 (0:00:01.898) 0:07:26.432 ******* 2026-02-20 02:53:30.456145 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:53:30.456166 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:53:30.456176 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:53:30.456187 | orchestrator | 2026-02-20 02:53:30.456198 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-20 02:53:30.456208 | orchestrator | Friday 20 February 2026 02:53:23 +0000 (0:00:00.329) 0:07:26.762 ******* 2026-02-20 02:53:30.456219 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:53:30.456230 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:53:30.456240 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:53:30.456251 | orchestrator | 2026-02-20 02:53:30.456261 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-20 02:53:30.456272 | orchestrator | Friday 20 February 2026 02:53:23 +0000 (0:00:00.318) 0:07:27.081 ******* 2026-02-20 02:53:30.456283 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-20 02:53:30.456293 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-20 02:53:30.456304 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-20 02:53:30.456314 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 02:53:30.456325 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-20 02:53:30.456335 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-20 02:53:30.456346 | orchestrator | 2026-02-20 02:53:30.456357 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-20 02:53:30.456367 | orchestrator | Friday 20 February 2026 02:53:24 +0000 (0:00:00.988) 0:07:28.070 ******* 2026-02-20 02:53:30.456378 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-20 02:53:30.456389 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-20 02:53:30.456399 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-20 02:53:30.456410 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-20 02:53:30.456420 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-20 02:53:30.456431 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-20 02:53:30.456441 | orchestrator | 2026-02-20 02:53:30.456452 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-20 02:53:30.456463 | orchestrator | Friday 20 February 2026 02:53:26 +0000 (0:00:02.354) 0:07:30.424 ******* 2026-02-20 02:53:30.456473 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-20 02:53:30.456496 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-20 02:54:00.009521 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-20 02:54:00.009637 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-20 02:54:00.009653 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-20 02:54:00.009664 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-20 02:54:00.009676 | orchestrator | 2026-02-20 02:54:00.009689 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-20 02:54:00.009701 | orchestrator | Friday 20 February 2026 02:53:30 +0000 (0:00:03.561) 0:07:33.986 ******* 2026-02-20 02:54:00.009713 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:54:00.009724 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:54:00.009735 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:54:00.009746 | orchestrator | 2026-02-20 02:54:00.009757 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-20 02:54:00.009768 | orchestrator | Friday 20 February 2026 02:53:32 +0000 (0:00:02.222) 0:07:36.209 ******* 2026-02-20 02:54:00.009779 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:54:00.009790 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:54:00.009868 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-20 02:54:00.009881 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 02:54:00.009892 | orchestrator | 2026-02-20 02:54:00.009904 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-20 02:54:00.009915 | orchestrator | Friday 20 February 2026 02:53:45 +0000 (0:00:12.407) 0:07:48.616 ******* 2026-02-20 02:54:00.009927 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:54:00.009938 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:54:00.009949 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:54:00.009960 | orchestrator | 2026-02-20 02:54:00.009971 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-20 02:54:00.009983 | orchestrator | Friday 20 February 2026 02:53:46 +0000 (0:00:01.113) 0:07:49.730 ******* 2026-02-20 02:54:00.009994 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:54:00.010005 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:54:00.010093 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:54:00.010107 | orchestrator | 2026-02-20 02:54:00.010120 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-20 02:54:00.010133 | orchestrator | Friday 20 February 2026 02:53:46 +0000 (0:00:00.307) 0:07:50.037 ******* 2026-02-20 02:54:00.010146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:54:00.010159 | orchestrator | 2026-02-20 02:54:00.010173 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-20 02:54:00.010186 | orchestrator | Friday 20 February 2026 02:53:47 +0000 (0:00:00.815) 0:07:50.852 ******* 2026-02-20 02:54:00.010197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:54:00.010209 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-02-20 02:54:00.010220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 02:54:00.010231 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010242 | orchestrator |
2026-02-20 02:54:00.010253 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-20 02:54:00.010265 | orchestrator | Friday 20 February 2026 02:53:47 +0000 (0:00:00.394) 0:07:51.247 *******
2026-02-20 02:54:00.010275 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010287 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.010297 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.010308 | orchestrator |
2026-02-20 02:54:00.010320 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-20 02:54:00.010331 | orchestrator | Friday 20 February 2026 02:53:48 +0000 (0:00:00.321) 0:07:51.569 *******
2026-02-20 02:54:00.010358 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010369 | orchestrator |
2026-02-20 02:54:00.010403 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-20 02:54:00.010414 | orchestrator | Friday 20 February 2026 02:53:48 +0000 (0:00:00.247) 0:07:51.817 *******
2026-02-20 02:54:00.010425 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010436 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.010447 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.010458 | orchestrator |
2026-02-20 02:54:00.010469 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-20 02:54:00.010480 | orchestrator | Friday 20 February 2026 02:53:48 +0000 (0:00:00.527) 0:07:52.344 *******
2026-02-20 02:54:00.010491 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010502 | orchestrator |
2026-02-20 02:54:00.010513 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-20 02:54:00.010524 | orchestrator | Friday 20 February 2026 02:53:49 +0000 (0:00:00.235) 0:07:52.580 *******
2026-02-20 02:54:00.010534 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010545 | orchestrator |
2026-02-20 02:54:00.010556 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-20 02:54:00.010567 | orchestrator | Friday 20 February 2026 02:53:49 +0000 (0:00:00.234) 0:07:52.814 *******
2026-02-20 02:54:00.010578 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010589 | orchestrator |
2026-02-20 02:54:00.010600 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-20 02:54:00.010611 | orchestrator | Friday 20 February 2026 02:53:49 +0000 (0:00:00.130) 0:07:52.945 *******
2026-02-20 02:54:00.010622 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010633 | orchestrator |
2026-02-20 02:54:00.010644 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-20 02:54:00.010655 | orchestrator | Friday 20 February 2026 02:53:49 +0000 (0:00:00.230) 0:07:53.175 *******
2026-02-20 02:54:00.010666 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010677 | orchestrator |
2026-02-20 02:54:00.010688 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-20 02:54:00.010699 | orchestrator | Friday 20 February 2026 02:53:49 +0000 (0:00:00.230) 0:07:53.406 *******
2026-02-20 02:54:00.010728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 02:54:00.010740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 02:54:00.010751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 02:54:00.010762 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010773 | orchestrator |
2026-02-20 02:54:00.010784 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-20 02:54:00.010819 | orchestrator | Friday 20 February 2026 02:53:50 +0000 (0:00:00.394) 0:07:53.801 *******
2026-02-20 02:54:00.010831 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010842 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.010853 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.010863 | orchestrator |
2026-02-20 02:54:00.010875 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-20 02:54:00.010885 | orchestrator | Friday 20 February 2026 02:53:50 +0000 (0:00:00.301) 0:07:54.103 *******
2026-02-20 02:54:00.010896 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010907 | orchestrator |
2026-02-20 02:54:00.010918 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-20 02:54:00.010929 | orchestrator | Friday 20 February 2026 02:53:50 +0000 (0:00:00.237) 0:07:54.340 *******
2026-02-20 02:54:00.010940 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.010951 | orchestrator |
2026-02-20 02:54:00.010962 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-20 02:54:00.010973 | orchestrator |
2026-02-20 02:54:00.010984 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 02:54:00.010995 | orchestrator | Friday 20 February 2026 02:53:51 +0000 (0:00:01.166) 0:07:55.507 *******
2026-02-20 02:54:00.011015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:54:00.011026 | orchestrator |
2026-02-20 02:54:00.011038 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 02:54:00.011049 | orchestrator | Friday 20 February 2026 02:53:53 +0000 (0:00:01.170) 0:07:56.677 *******
2026-02-20 02:54:00.011060 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:54:00.011071 | orchestrator |
2026-02-20 02:54:00.011082 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 02:54:00.011093 | orchestrator | Friday 20 February 2026 02:53:54 +0000 (0:00:01.239) 0:07:57.861 *******
2026-02-20 02:54:00.011103 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.011114 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.011126 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.011137 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:00.011148 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:00.011159 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:00.011170 | orchestrator |
2026-02-20 02:54:00.011181 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 02:54:00.011192 | orchestrator | Friday 20 February 2026 02:53:55 +0000 (0:00:01.239) 0:07:59.101 *******
2026-02-20 02:54:00.011202 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:00.011213 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:00.011224 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:00.011235 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:00.011246 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:00.011257 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:00.011268 | orchestrator |
2026-02-20 02:54:00.011279 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 02:54:00.011290 | orchestrator | Friday 20 February 2026 02:53:56 +0000 (0:00:00.797) 0:07:59.898 *******
2026-02-20 02:54:00.011301 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:00.011317 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:00.011328 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:00.011339 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:00.011350 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:00.011361 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:00.011372 | orchestrator |
2026-02-20 02:54:00.011382 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 02:54:00.011394 | orchestrator | Friday 20 February 2026 02:53:57 +0000 (0:00:00.856) 0:08:00.755 *******
2026-02-20 02:54:00.011404 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:00.011415 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:00.011426 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:00.011437 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:00.011448 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:00.011459 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:00.011470 | orchestrator |
2026-02-20 02:54:00.011481 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 02:54:00.011492 | orchestrator | Friday 20 February 2026 02:53:57 +0000 (0:00:00.735) 0:08:01.490 *******
2026-02-20 02:54:00.011503 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.011514 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.011525 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.011536 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:00.011546 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:00.011557 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:00.011568 | orchestrator |
2026-02-20 02:54:00.011579 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 02:54:00.011590 | orchestrator | Friday 20 February 2026 02:53:59 +0000 (0:00:01.290) 0:08:02.780 *******
2026-02-20 02:54:00.011601 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:00.011618 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:00.011629 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:00.011640 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:00.011651 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:00.011662 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:00.011672 | orchestrator |
2026-02-20 02:54:00.011684 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 02:54:00.011695 | orchestrator | Friday 20 February 2026 02:53:59 +0000 (0:00:00.592) 0:08:03.372 *******
2026-02-20 02:54:00.011712 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.219559 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.219678 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.219693 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.219705 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.219716 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.219728 | orchestrator |
2026-02-20 02:54:30.219740 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 02:54:30.219753 | orchestrator | Friday 20 February 2026 02:54:00 +0000 (0:00:00.825) 0:08:04.198 *******
2026-02-20 02:54:30.219764 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.219776 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.219787 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.219798 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.219808 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.219819 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.219844 | orchestrator |
2026-02-20 02:54:30.219918 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 02:54:30.219930 | orchestrator | Friday 20 February 2026 02:54:01 +0000 (0:00:01.074) 0:08:05.272 *******
2026-02-20 02:54:30.219941 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.219951 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.219962 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.219973 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.219983 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.219994 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.220006 | orchestrator |
2026-02-20 02:54:30.220017 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 02:54:30.220028 | orchestrator | Friday 20 February 2026 02:54:02 +0000 (0:00:01.244) 0:08:06.516 *******
2026-02-20 02:54:30.220039 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.220050 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.220060 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.220071 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220086 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220105 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220125 | orchestrator |
2026-02-20 02:54:30.220156 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 02:54:30.220177 | orchestrator | Friday 20 February 2026 02:54:03 +0000 (0:00:00.603) 0:08:07.120 *******
2026-02-20 02:54:30.220196 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.220216 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.220234 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.220253 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.220273 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.220294 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.220327 | orchestrator |
2026-02-20 02:54:30.220348 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 02:54:30.220367 | orchestrator | Friday 20 February 2026 02:54:04 +0000 (0:00:00.824) 0:08:07.945 *******
2026-02-20 02:54:30.220386 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.220400 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.220412 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.220425 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220437 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220473 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220486 | orchestrator |
2026-02-20 02:54:30.220497 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 02:54:30.220508 | orchestrator | Friday 20 February 2026 02:54:04 +0000 (0:00:00.596) 0:08:08.541 *******
2026-02-20 02:54:30.220518 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.220529 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.220539 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.220550 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220561 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220571 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220581 | orchestrator |
2026-02-20 02:54:30.220592 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 02:54:30.220603 | orchestrator | Friday 20 February 2026 02:54:05 +0000 (0:00:00.801) 0:08:09.343 *******
2026-02-20 02:54:30.220614 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.220625 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.220635 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.220646 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220656 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220666 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220677 | orchestrator |
2026-02-20 02:54:30.220688 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 02:54:30.220698 | orchestrator | Friday 20 February 2026 02:54:06 +0000 (0:00:00.593) 0:08:09.937 *******
2026-02-20 02:54:30.220709 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.220719 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.220730 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.220740 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220751 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220761 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220772 | orchestrator |
2026-02-20 02:54:30.220783 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 02:54:30.220793 | orchestrator | Friday 20 February 2026 02:54:07 +0000 (0:00:00.790) 0:08:10.727 *******
2026-02-20 02:54:30.220804 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.220814 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.220825 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.220835 | orchestrator | skipping: [testbed-node-0]
2026-02-20 02:54:30.220864 | orchestrator | skipping: [testbed-node-1]
2026-02-20 02:54:30.220876 | orchestrator | skipping: [testbed-node-2]
2026-02-20 02:54:30.220886 | orchestrator |
2026-02-20 02:54:30.220897 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 02:54:30.220908 | orchestrator | Friday 20 February 2026 02:54:07 +0000 (0:00:00.575) 0:08:11.303 *******
2026-02-20 02:54:30.220927 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:30.220945 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:30.220964 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:30.220982 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.221000 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.221018 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.221037 | orchestrator |
2026-02-20 02:54:30.221057 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 02:54:30.221104 | orchestrator | Friday 20 February 2026 02:54:08 +0000 (0:00:00.800) 0:08:12.104 *******
2026-02-20 02:54:30.221119 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.221129 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.221140 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.221150 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.221161 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.221171 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.221181 | orchestrator |
2026-02-20 02:54:30.221192 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 02:54:30.221203 | orchestrator | Friday 20 February 2026 02:54:09 +0000 (0:00:00.723) 0:08:12.827 *******
2026-02-20 02:54:30.221276 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.221288 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.221299 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.221310 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.221321 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.221331 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.221342 | orchestrator |
2026-02-20 02:54:30.221352 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-20 02:54:30.221363 | orchestrator | Friday 20 February 2026 02:54:10 +0000 (0:00:01.268) 0:08:14.096 *******
2026-02-20 02:54:30.221374 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-20 02:54:30.221385 | orchestrator |
2026-02-20 02:54:30.221396 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-20 02:54:30.221407 | orchestrator | Friday 20 February 2026 02:54:14 +0000 (0:00:04.045) 0:08:18.141 *******
2026-02-20 02:54:30.221418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-20 02:54:30.221429 | orchestrator |
2026-02-20 02:54:30.221439 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-20 02:54:30.221450 | orchestrator | Friday 20 February 2026 02:54:16 +0000 (0:00:01.979) 0:08:20.120 *******
2026-02-20 02:54:30.221461 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:54:30.221471 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:54:30.221482 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:54:30.221493 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.221503 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:54:30.221514 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:54:30.221524 | orchestrator |
2026-02-20 02:54:30.221535 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-20 02:54:30.221546 | orchestrator | Friday 20 February 2026 02:54:18 +0000 (0:00:01.790) 0:08:21.911 *******
2026-02-20 02:54:30.221556 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:54:30.221567 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:54:30.221578 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:54:30.221588 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:54:30.221599 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:54:30.221609 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:54:30.221620 | orchestrator |
2026-02-20 02:54:30.221631 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-20 02:54:30.221641 | orchestrator | Friday 20 February 2026 02:54:19 +0000 (0:00:01.244) 0:08:23.155 *******
2026-02-20 02:54:30.221653 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:54:30.221666 | orchestrator |
2026-02-20 02:54:30.221676 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-20 02:54:30.221687 | orchestrator | Friday 20 February 2026 02:54:20 +0000 (0:00:01.251) 0:08:24.407 *******
2026-02-20 02:54:30.221698 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:54:30.221708 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:54:30.221719 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:54:30.221730 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:54:30.221740 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:54:30.221751 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:54:30.221761 | orchestrator |
2026-02-20 02:54:30.221772 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-20 02:54:30.221788 | orchestrator | Friday 20 February 2026 02:54:22 +0000 (0:00:01.552) 0:08:25.959 *******
2026-02-20 02:54:30.221799 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:54:30.221810 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:54:30.221820 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:54:30.221831 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:54:30.221841 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:54:30.221909 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:54:30.221922 | orchestrator |
2026-02-20 02:54:30.221933 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-20 02:54:30.221944 | orchestrator | Friday 20 February 2026 02:54:25 +0000 (0:00:03.514) 0:08:29.473 *******
2026-02-20 02:54:30.221955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 02:54:30.221966 | orchestrator |
2026-02-20 02:54:30.221976 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-20 02:54:30.221987 | orchestrator | Friday 20 February 2026 02:54:27 +0000 (0:00:01.207) 0:08:30.680 *******
2026-02-20 02:54:30.221998 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:30.222008 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:30.222097 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:30.222109 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:30.222119 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:30.222130 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:30.222140 | orchestrator |
2026-02-20 02:54:30.222151 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-20 02:54:30.222162 | orchestrator | Friday 20 February 2026 02:54:27 +0000 (0:00:00.624) 0:08:31.305 *******
2026-02-20 02:54:30.222173 | orchestrator | changed: [testbed-node-3]
2026-02-20 02:54:30.222183 | orchestrator | changed: [testbed-node-4]
2026-02-20 02:54:30.222194 | orchestrator | changed: [testbed-node-5]
2026-02-20 02:54:30.222205 | orchestrator | changed: [testbed-node-0]
2026-02-20 02:54:30.222216 | orchestrator | changed: [testbed-node-1]
2026-02-20 02:54:30.222226 | orchestrator | changed: [testbed-node-2]
2026-02-20 02:54:30.222237 | orchestrator |
2026-02-20 02:54:30.222247 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-20 02:54:30.222270 | orchestrator | Friday 20 February 2026 02:54:30 +0000 (0:00:02.440) 0:08:33.746 *******
2026-02-20 02:54:57.461758 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.461841 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.461848 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.461852 | orchestrator | ok: [testbed-node-0]
2026-02-20 02:54:57.461857 | orchestrator | ok: [testbed-node-1]
2026-02-20 02:54:57.461861 | orchestrator | ok: [testbed-node-2]
2026-02-20 02:54:57.461866 | orchestrator |
2026-02-20 02:54:57.461871 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-20 02:54:57.461876 | orchestrator |
2026-02-20 02:54:57.461881 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 02:54:57.461885 | orchestrator | Friday 20 February 2026 02:54:31 +0000 (0:00:00.832) 0:08:34.579 *******
2026-02-20 02:54:57.461890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:54:57.461921 | orchestrator |
2026-02-20 02:54:57.461925 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 02:54:57.461929 | orchestrator | Friday 20 February 2026 02:54:31 +0000 (0:00:00.768) 0:08:35.347 *******
2026-02-20 02:54:57.461933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:54:57.461937 | orchestrator |
2026-02-20 02:54:57.461941 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 02:54:57.461945 | orchestrator | Friday 20 February 2026 02:54:32 +0000 (0:00:00.522) 0:08:35.869 *******
2026-02-20 02:54:57.461949 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.461954 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.461958 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.461961 | orchestrator |
2026-02-20 02:54:57.461965 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 02:54:57.461969 | orchestrator | Friday 20 February 2026 02:54:32 +0000 (0:00:00.481) 0:08:36.350 *******
2026-02-20 02:54:57.461973 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.461991 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.461995 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.461999 | orchestrator |
2026-02-20 02:54:57.462003 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 02:54:57.462007 | orchestrator | Friday 20 February 2026 02:54:33 +0000 (0:00:00.713) 0:08:37.064 *******
2026-02-20 02:54:57.462044 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462048 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462052 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462056 | orchestrator |
2026-02-20 02:54:57.462060 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 02:54:57.462064 | orchestrator | Friday 20 February 2026 02:54:34 +0000 (0:00:00.749) 0:08:37.814 *******
2026-02-20 02:54:57.462067 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462071 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462075 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462079 | orchestrator |
2026-02-20 02:54:57.462082 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 02:54:57.462086 | orchestrator | Friday 20 February 2026 02:54:34 +0000 (0:00:00.686) 0:08:38.500 *******
2026-02-20 02:54:57.462090 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462094 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462097 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462101 | orchestrator |
2026-02-20 02:54:57.462105 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 02:54:57.462109 | orchestrator | Friday 20 February 2026 02:54:35 +0000 (0:00:00.562) 0:08:39.063 *******
2026-02-20 02:54:57.462112 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462116 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462120 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462124 | orchestrator |
2026-02-20 02:54:57.462127 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 02:54:57.462140 | orchestrator | Friday 20 February 2026 02:54:35 +0000 (0:00:00.290) 0:08:39.354 *******
2026-02-20 02:54:57.462144 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462148 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462152 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462156 | orchestrator |
2026-02-20 02:54:57.462160 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 02:54:57.462163 | orchestrator | Friday 20 February 2026 02:54:36 +0000 (0:00:00.291) 0:08:39.645 *******
2026-02-20 02:54:57.462167 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462171 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462175 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462178 | orchestrator |
2026-02-20 02:54:57.462182 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 02:54:57.462186 | orchestrator | Friday 20 February 2026 02:54:37 +0000 (0:00:00.943) 0:08:40.589 *******
2026-02-20 02:54:57.462189 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462193 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462197 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462201 | orchestrator |
2026-02-20 02:54:57.462204 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 02:54:57.462208 | orchestrator | Friday 20 February 2026 02:54:37 +0000 (0:00:00.730) 0:08:41.319 *******
2026-02-20 02:54:57.462212 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462216 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462219 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462223 | orchestrator |
2026-02-20 02:54:57.462227 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 02:54:57.462230 | orchestrator | Friday 20 February 2026 02:54:38 +0000 (0:00:00.315) 0:08:41.635 *******
2026-02-20 02:54:57.462234 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462238 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462241 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462249 | orchestrator |
2026-02-20 02:54:57.462253 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 02:54:57.462257 | orchestrator | Friday 20 February 2026 02:54:38 +0000 (0:00:00.326) 0:08:41.961 *******
2026-02-20 02:54:57.462261 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462265 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462268 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462272 | orchestrator |
2026-02-20 02:54:57.462285 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 02:54:57.462290 | orchestrator | Friday 20 February 2026 02:54:38 +0000 (0:00:00.565) 0:08:42.526 *******
2026-02-20 02:54:57.462293 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462297 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462301 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462305 | orchestrator |
2026-02-20 02:54:57.462309 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 02:54:57.462312 | orchestrator | Friday 20 February 2026 02:54:39 +0000 (0:00:00.401) 0:08:42.928 *******
2026-02-20 02:54:57.462316 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462320 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462323 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462327 | orchestrator |
2026-02-20 02:54:57.462331 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 02:54:57.462336 | orchestrator | Friday 20 February 2026 02:54:39 +0000 (0:00:00.317) 0:08:43.245 *******
2026-02-20 02:54:57.462340 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462344 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462349 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462353 | orchestrator |
2026-02-20 02:54:57.462357 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 02:54:57.462361 | orchestrator | Friday 20 February 2026 02:54:39 +0000 (0:00:00.292) 0:08:43.538 *******
2026-02-20 02:54:57.462366 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462370 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462375 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462379 | orchestrator |
2026-02-20 02:54:57.462383 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 02:54:57.462387 | orchestrator | Friday 20 February 2026 02:54:40 +0000 (0:00:00.573) 0:08:44.111 *******
2026-02-20 02:54:57.462392 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462396 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462400 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462405 | orchestrator |
2026-02-20 02:54:57.462409 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 02:54:57.462413 | orchestrator | Friday 20 February 2026 02:54:40 +0000 (0:00:00.327) 0:08:44.439 *******
2026-02-20 02:54:57.462417 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462422 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462426 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462430 | orchestrator |
2026-02-20 02:54:57.462434 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 02:54:57.462439 | orchestrator | Friday 20 February 2026 02:54:41 +0000 (0:00:00.341) 0:08:44.780 *******
2026-02-20 02:54:57.462443 | orchestrator | ok: [testbed-node-3]
2026-02-20 02:54:57.462447 | orchestrator | ok: [testbed-node-4]
2026-02-20 02:54:57.462451 | orchestrator | ok: [testbed-node-5]
2026-02-20 02:54:57.462456 | orchestrator |
2026-02-20 02:54:57.462460 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-20 02:54:57.462464 | orchestrator | Friday 20 February 2026 02:54:41 +0000 (0:00:00.755) 0:08:45.535 *******
2026-02-20 02:54:57.462469 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:54:57.462473 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:54:57.462478 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-20 02:54:57.462482 | orchestrator |
2026-02-20 02:54:57.462490 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-20 02:54:57.462494 | orchestrator | Friday 20 February 2026 02:54:42 +0000 (0:00:00.403) 0:08:45.939 *******
2026-02-20 02:54:57.462499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-20 02:54:57.462503 | orchestrator |
2026-02-20 02:54:57.462508 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-20 02:54:57.462512 | orchestrator | Friday 20 February 2026 02:54:44 +0000 (0:00:02.283) 0:08:48.222 *******
2026-02-20 02:54:57.462520 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-20 02:54:57.462526 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:54:57.462530 | orchestrator |
2026-02-20 02:54:57.462534 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-20 02:54:57.462539 | orchestrator | Friday 20 February 2026 02:54:44 +0000 (0:00:00.213) 0:08:48.436 *******
2026-02-20 02:54:57.462545 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-20 02:54:57.462555 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-20 02:54:57.462560 | orchestrator |
2026-02-20 02:54:57.462564 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-20 02:54:57.462569 | orchestrator | Friday 20 February 2026 02:54:53 +0000 (0:00:08.184) 0:08:56.621 *******
2026-02-20 02:54:57.462573 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-20 02:54:57.462577 | orchestrator |
2026-02-20 02:54:57.462582 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-20 02:54:57.462586 | orchestrator | Friday 20 February 2026 02:54:56 +0000 (0:00:03.608) 0:09:00.229 *******
2026-02-20 02:54:57.462590 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 02:54:57.462595 | orchestrator |
2026-02-20 02:54:57.462602 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-20 02:55:24.099207 | orchestrator | Friday 20 February 2026 02:54:57 +0000 (0:00:00.766) 0:09:00.996 *******
2026-02-20 02:55:24.099306 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-20 02:55:24.099317 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-20 02:55:24.099324 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-20 02:55:24.099331 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-20 02:55:24.099340 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-20 02:55:24.099347 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-20 02:55:24.099354 | orchestrator |
2026-02-20 02:55:24.099361 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-20 02:55:24.099368 | orchestrator | Friday 20 February 2026 02:54:58 +0000 (0:00:01.043) 0:09:02.039 *******
2026-02-20 02:55:24.099375 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 02:55:24.099383 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-20 02:55:24.099390 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-20 02:55:24.099397 | orchestrator |
2026-02-20 02:55:24.099405 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-20 02:55:24.099411 | orchestrator | Friday 20 February 2026 02:55:00 +0000 (0:00:02.127) 0:09:04.167 *******
2026-02-20 02:55:24.099436 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-20 02:55:24.099443 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-20 02:55:24.099450 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099457 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-20 02:55:24.099463 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 02:55:24.099470 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099476 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-20 02:55:24.099483 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 02:55:24.099489 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099496 | orchestrator | 2026-02-20 02:55:24.099503 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-20 02:55:24.099509 | orchestrator | Friday 20 February 2026 02:55:01 +0000 (0:00:01.197) 0:09:05.364 ******* 2026-02-20 02:55:24.099516 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099523 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099529 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099536 | orchestrator | 2026-02-20 02:55:24.099542 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-20 02:55:24.099549 | orchestrator | Friday 20 February 2026 02:55:04 +0000 (0:00:02.998) 0:09:08.363 ******* 2026-02-20 02:55:24.099555 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:24.099562 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:24.099568 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:24.099575 | orchestrator | 2026-02-20 02:55:24.099581 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-20 02:55:24.099588 | orchestrator | Friday 20 February 2026 02:55:05 +0000 (0:00:00.335) 0:09:08.698 ******* 2026-02-20 02:55:24.099595 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-20 02:55:24.099602 | orchestrator | 2026-02-20 02:55:24.099609 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-20 02:55:24.099615 | orchestrator | Friday 20 February 2026 02:55:05 +0000 (0:00:00.568) 0:09:09.267 ******* 2026-02-20 02:55:24.099633 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:24.099641 | orchestrator | 2026-02-20 02:55:24.099648 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-20 02:55:24.099654 | orchestrator | Friday 20 February 2026 02:55:06 +0000 (0:00:00.773) 0:09:10.040 ******* 2026-02-20 02:55:24.099661 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099667 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099674 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099680 | orchestrator | 2026-02-20 02:55:24.099687 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-20 02:55:24.099693 | orchestrator | Friday 20 February 2026 02:55:07 +0000 (0:00:01.279) 0:09:11.320 ******* 2026-02-20 02:55:24.099700 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099710 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099721 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099735 | orchestrator | 2026-02-20 02:55:24.099751 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-20 02:55:24.099763 | orchestrator | Friday 20 February 2026 02:55:09 +0000 (0:00:01.428) 0:09:12.749 ******* 2026-02-20 02:55:24.099774 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099784 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099795 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099806 | orchestrator | 2026-02-20 
02:55:24.099815 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-20 02:55:24.099826 | orchestrator | Friday 20 February 2026 02:55:11 +0000 (0:00:01.829) 0:09:14.578 ******* 2026-02-20 02:55:24.099836 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.099856 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.099867 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.099879 | orchestrator | 2026-02-20 02:55:24.099890 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-20 02:55:24.099901 | orchestrator | Friday 20 February 2026 02:55:12 +0000 (0:00:01.968) 0:09:16.547 ******* 2026-02-20 02:55:24.099913 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.099947 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:24.099958 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:24.099970 | orchestrator | 2026-02-20 02:55:24.099981 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-20 02:55:24.100004 | orchestrator | Friday 20 February 2026 02:55:14 +0000 (0:00:01.514) 0:09:18.061 ******* 2026-02-20 02:55:24.100013 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.100021 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.100029 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.100036 | orchestrator | 2026-02-20 02:55:24.100044 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-20 02:55:24.100051 | orchestrator | Friday 20 February 2026 02:55:15 +0000 (0:00:00.720) 0:09:18.782 ******* 2026-02-20 02:55:24.100059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:24.100067 | orchestrator | 2026-02-20 02:55:24.100075 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-20 02:55:24.100082 | orchestrator | Friday 20 February 2026 02:55:15 +0000 (0:00:00.726) 0:09:19.508 ******* 2026-02-20 02:55:24.100089 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.100096 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:24.100102 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:24.100109 | orchestrator | 2026-02-20 02:55:24.100116 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-20 02:55:24.100122 | orchestrator | Friday 20 February 2026 02:55:16 +0000 (0:00:00.328) 0:09:19.837 ******* 2026-02-20 02:55:24.100129 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:24.100136 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:24.100142 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:24.100149 | orchestrator | 2026-02-20 02:55:24.100156 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-20 02:55:24.100162 | orchestrator | Friday 20 February 2026 02:55:17 +0000 (0:00:01.241) 0:09:21.079 ******* 2026-02-20 02:55:24.100169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:55:24.100176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:55:24.100182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:55:24.100189 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:24.100196 | orchestrator | 2026-02-20 02:55:24.100202 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-20 02:55:24.100209 | orchestrator | Friday 20 February 2026 02:55:18 +0000 (0:00:00.858) 0:09:21.937 ******* 2026-02-20 02:55:24.100215 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.100222 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:24.100229 | orchestrator | ok: [testbed-node-5] 2026-02-20 
02:55:24.100235 | orchestrator | 2026-02-20 02:55:24.100242 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-20 02:55:24.100249 | orchestrator | 2026-02-20 02:55:24.100255 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 02:55:24.100262 | orchestrator | Friday 20 February 2026 02:55:19 +0000 (0:00:00.782) 0:09:22.719 ******* 2026-02-20 02:55:24.100269 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:24.100277 | orchestrator | 2026-02-20 02:55:24.100284 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 02:55:24.100290 | orchestrator | Friday 20 February 2026 02:55:19 +0000 (0:00:00.498) 0:09:23.218 ******* 2026-02-20 02:55:24.100303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:24.100310 | orchestrator | 2026-02-20 02:55:24.100316 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 02:55:24.100323 | orchestrator | Friday 20 February 2026 02:55:20 +0000 (0:00:00.735) 0:09:23.954 ******* 2026-02-20 02:55:24.100330 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:24.100342 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:24.100349 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:24.100356 | orchestrator | 2026-02-20 02:55:24.100362 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 02:55:24.100369 | orchestrator | Friday 20 February 2026 02:55:20 +0000 (0:00:00.320) 0:09:24.274 ******* 2026-02-20 02:55:24.100376 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.100382 | orchestrator | ok: [testbed-node-4] 2026-02-20 
02:55:24.100389 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:24.100395 | orchestrator | 2026-02-20 02:55:24.100402 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 02:55:24.100409 | orchestrator | Friday 20 February 2026 02:55:21 +0000 (0:00:00.741) 0:09:25.016 ******* 2026-02-20 02:55:24.100415 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.100422 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:24.100429 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:24.100435 | orchestrator | 2026-02-20 02:55:24.100442 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 02:55:24.100448 | orchestrator | Friday 20 February 2026 02:55:22 +0000 (0:00:00.728) 0:09:25.745 ******* 2026-02-20 02:55:24.100455 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:24.100462 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:24.100468 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:24.100475 | orchestrator | 2026-02-20 02:55:24.100482 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 02:55:24.100488 | orchestrator | Friday 20 February 2026 02:55:23 +0000 (0:00:01.055) 0:09:26.800 ******* 2026-02-20 02:55:24.100495 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:24.100502 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:24.100508 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:24.100515 | orchestrator | 2026-02-20 02:55:24.100522 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 02:55:24.100528 | orchestrator | Friday 20 February 2026 02:55:23 +0000 (0:00:00.316) 0:09:27.117 ******* 2026-02-20 02:55:24.100535 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:24.100542 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:24.100548 | orchestrator | skipping: 
[testbed-node-5] 2026-02-20 02:55:24.100555 | orchestrator | 2026-02-20 02:55:24.100561 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 02:55:24.100568 | orchestrator | Friday 20 February 2026 02:55:23 +0000 (0:00:00.334) 0:09:27.452 ******* 2026-02-20 02:55:24.100579 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.337090 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.337238 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.337264 | orchestrator | 2026-02-20 02:55:45.337283 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 02:55:45.337302 | orchestrator | Friday 20 February 2026 02:55:24 +0000 (0:00:00.555) 0:09:28.007 ******* 2026-02-20 02:55:45.337321 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.337340 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.337358 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.337374 | orchestrator | 2026-02-20 02:55:45.337391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 02:55:45.337408 | orchestrator | Friday 20 February 2026 02:55:25 +0000 (0:00:00.751) 0:09:28.758 ******* 2026-02-20 02:55:45.337425 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.337443 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.337491 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.337510 | orchestrator | 2026-02-20 02:55:45.337528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 02:55:45.337547 | orchestrator | Friday 20 February 2026 02:55:25 +0000 (0:00:00.729) 0:09:29.487 ******* 2026-02-20 02:55:45.337566 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.337586 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.337604 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
02:55:45.337623 | orchestrator | 2026-02-20 02:55:45.337643 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 02:55:45.337663 | orchestrator | Friday 20 February 2026 02:55:26 +0000 (0:00:00.293) 0:09:29.781 ******* 2026-02-20 02:55:45.337681 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.337700 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.337711 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.337722 | orchestrator | 2026-02-20 02:55:45.337733 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 02:55:45.337743 | orchestrator | Friday 20 February 2026 02:55:26 +0000 (0:00:00.530) 0:09:30.312 ******* 2026-02-20 02:55:45.337754 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.337765 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.337776 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.337786 | orchestrator | 2026-02-20 02:55:45.337802 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 02:55:45.337821 | orchestrator | Friday 20 February 2026 02:55:27 +0000 (0:00:00.348) 0:09:30.660 ******* 2026-02-20 02:55:45.337838 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.337855 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.337873 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.337890 | orchestrator | 2026-02-20 02:55:45.337907 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 02:55:45.337926 | orchestrator | Friday 20 February 2026 02:55:27 +0000 (0:00:00.336) 0:09:30.997 ******* 2026-02-20 02:55:45.337943 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.337993 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.338011 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.338105 | orchestrator | 2026-02-20 
02:55:45.338125 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 02:55:45.338146 | orchestrator | Friday 20 February 2026 02:55:27 +0000 (0:00:00.315) 0:09:31.313 ******* 2026-02-20 02:55:45.338165 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.338184 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.338196 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.338206 | orchestrator | 2026-02-20 02:55:45.338217 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 02:55:45.338228 | orchestrator | Friday 20 February 2026 02:55:28 +0000 (0:00:00.535) 0:09:31.849 ******* 2026-02-20 02:55:45.338238 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.338249 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.338259 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.338270 | orchestrator | 2026-02-20 02:55:45.338296 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 02:55:45.338308 | orchestrator | Friday 20 February 2026 02:55:28 +0000 (0:00:00.325) 0:09:32.174 ******* 2026-02-20 02:55:45.338318 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.338329 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.338340 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.338350 | orchestrator | 2026-02-20 02:55:45.338361 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 02:55:45.338372 | orchestrator | Friday 20 February 2026 02:55:28 +0000 (0:00:00.311) 0:09:32.485 ******* 2026-02-20 02:55:45.338382 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.338393 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.338404 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.338427 | orchestrator | 2026-02-20 02:55:45.338438 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 02:55:45.338449 | orchestrator | Friday 20 February 2026 02:55:29 +0000 (0:00:00.339) 0:09:32.825 ******* 2026-02-20 02:55:45.338460 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:55:45.338471 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:55:45.338482 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:55:45.338492 | orchestrator | 2026-02-20 02:55:45.338503 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-20 02:55:45.338514 | orchestrator | Friday 20 February 2026 02:55:30 +0000 (0:00:00.791) 0:09:33.616 ******* 2026-02-20 02:55:45.338525 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:45.338537 | orchestrator | 2026-02-20 02:55:45.338548 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 02:55:45.338559 | orchestrator | Friday 20 February 2026 02:55:30 +0000 (0:00:00.539) 0:09:34.156 ******* 2026-02-20 02:55:45.338570 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.338580 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 02:55:45.338592 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:55:45.338603 | orchestrator | 2026-02-20 02:55:45.338614 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 02:55:45.338624 | orchestrator | Friday 20 February 2026 02:55:32 +0000 (0:00:02.368) 0:09:36.524 ******* 2026-02-20 02:55:45.338660 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-20 02:55:45.338672 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 02:55:45.338682 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:45.338693 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-20 02:55:45.338704 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 02:55:45.338714 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:45.338725 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-20 02:55:45.338736 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 02:55:45.338746 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:45.338757 | orchestrator | 2026-02-20 02:55:45.338768 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-20 02:55:45.338778 | orchestrator | Friday 20 February 2026 02:55:34 +0000 (0:00:01.497) 0:09:38.021 ******* 2026-02-20 02:55:45.338789 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:55:45.338800 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:55:45.338811 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:55:45.338821 | orchestrator | 2026-02-20 02:55:45.338835 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-20 02:55:45.338854 | orchestrator | Friday 20 February 2026 02:55:34 +0000 (0:00:00.320) 0:09:38.342 ******* 2026-02-20 02:55:45.338872 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:55:45.338890 | orchestrator | 2026-02-20 02:55:45.338908 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-20 02:55:45.338926 | orchestrator | Friday 20 February 2026 02:55:35 +0000 (0:00:00.529) 0:09:38.871 ******* 2026-02-20 02:55:45.338969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 02:55:45.338992 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 02:55:45.339010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 02:55:45.339027 | orchestrator | 2026-02-20 02:55:45.339047 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-20 02:55:45.339081 | orchestrator | Friday 20 February 2026 02:55:36 +0000 (0:00:01.061) 0:09:39.933 ******* 2026-02-20 02:55:45.339100 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339117 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 02:55:45.339135 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339153 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 02:55:45.339171 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339190 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 02:55:45.339207 | orchestrator | 2026-02-20 02:55:45.339234 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 02:55:45.339253 | orchestrator | Friday 20 February 2026 02:55:40 +0000 (0:00:04.424) 0:09:44.358 ******* 2026-02-20 02:55:45.339271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339289 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:55:45.339307 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339325 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:55:45.339344 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:55:45.339362 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:55:45.339379 | orchestrator | 2026-02-20 02:55:45.339394 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 02:55:45.339410 | orchestrator | Friday 20 February 2026 02:55:43 +0000 (0:00:02.200) 0:09:46.558 ******* 2026-02-20 02:55:45.339427 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-20 02:55:45.339445 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:55:45.339462 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-20 02:55:45.339479 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:55:45.339497 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-20 02:55:45.339514 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:55:45.339533 | orchestrator | 2026-02-20 02:55:45.339551 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-20 02:55:45.339569 | orchestrator | Friday 20 February 2026 02:55:44 +0000 (0:00:01.499) 0:09:48.057 ******* 2026-02-20 02:55:45.339587 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-20 02:55:45.339604 | orchestrator | 2026-02-20 02:55:45.339620 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-20 02:55:45.339637 | orchestrator | Friday 20 February 2026 02:55:44 +0000 (0:00:00.229) 0:09:48.287 ******* 2026-02-20 02:55:45.339655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-20 02:55:45.339686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168887 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:29.168921 | orchestrator | 2026-02-20 02:56:29.168932 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-20 02:56:29.168944 | orchestrator | Friday 20 February 2026 02:55:45 +0000 (0:00:00.583) 0:09:48.871 ******* 2026-02-20 02:56:29.168954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.168984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.169067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 02:56:29.169085 | orchestrator | skipping: [testbed-node-3] 2026-02-20 
02:56:29.169101 | orchestrator | 2026-02-20 02:56:29.169115 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-20 02:56:29.169126 | orchestrator | Friday 20 February 2026 02:55:45 +0000 (0:00:00.605) 0:09:49.476 ******* 2026-02-20 02:56:29.169135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 02:56:29.169147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 02:56:29.169156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 02:56:29.169166 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 02:56:29.169175 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 02:56:29.169185 | orchestrator | 2026-02-20 02:56:29.169212 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-20 02:56:29.169230 | orchestrator | Friday 20 February 2026 02:56:17 +0000 (0:00:31.270) 0:10:20.746 ******* 2026-02-20 02:56:29.169245 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:29.169261 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:29.169276 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:29.169291 | orchestrator | 2026-02-20 02:56:29.169305 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-20 02:56:29.169317 | orchestrator | 
Friday 20 February 2026 02:56:17 +0000 (0:00:00.308) 0:10:21.055 ******* 2026-02-20 02:56:29.169328 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:29.169339 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:29.169349 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:29.169360 | orchestrator | 2026-02-20 02:56:29.169371 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-20 02:56:29.169382 | orchestrator | Friday 20 February 2026 02:56:17 +0000 (0:00:00.285) 0:10:21.341 ******* 2026-02-20 02:56:29.169394 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:56:29.169405 | orchestrator | 2026-02-20 02:56:29.169416 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-20 02:56:29.169426 | orchestrator | Friday 20 February 2026 02:56:18 +0000 (0:00:00.783) 0:10:22.125 ******* 2026-02-20 02:56:29.169437 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:56:29.169449 | orchestrator | 2026-02-20 02:56:29.169471 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-20 02:56:29.169482 | orchestrator | Friday 20 February 2026 02:56:19 +0000 (0:00:00.508) 0:10:22.633 ******* 2026-02-20 02:56:29.169493 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:56:29.169503 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:56:29.169514 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:56:29.169525 | orchestrator | 2026-02-20 02:56:29.169536 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-20 02:56:29.169546 | orchestrator | Friday 20 February 2026 02:56:20 +0000 (0:00:01.519) 0:10:24.153 ******* 2026-02-20 02:56:29.169556 | orchestrator | changed: 
[testbed-node-3] 2026-02-20 02:56:29.169565 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:56:29.169575 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:56:29.169585 | orchestrator | 2026-02-20 02:56:29.169594 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-20 02:56:29.169623 | orchestrator | Friday 20 February 2026 02:56:21 +0000 (0:00:01.186) 0:10:25.340 ******* 2026-02-20 02:56:29.169633 | orchestrator | changed: [testbed-node-3] 2026-02-20 02:56:29.169643 | orchestrator | changed: [testbed-node-5] 2026-02-20 02:56:29.169652 | orchestrator | changed: [testbed-node-4] 2026-02-20 02:56:29.169662 | orchestrator | 2026-02-20 02:56:29.169671 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-20 02:56:29.169681 | orchestrator | Friday 20 February 2026 02:56:23 +0000 (0:00:01.783) 0:10:27.123 ******* 2026-02-20 02:56:29.169691 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 02:56:29.169700 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 02:56:29.169710 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 02:56:29.169720 | orchestrator | 2026-02-20 02:56:29.169729 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-20 02:56:29.169739 | orchestrator | Friday 20 February 2026 02:56:26 +0000 (0:00:02.602) 0:10:29.725 ******* 2026-02-20 02:56:29.169748 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:29.169758 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:29.169767 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:29.169776 | orchestrator 
| 2026-02-20 02:56:29.169786 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-20 02:56:29.169796 | orchestrator | Friday 20 February 2026 02:56:26 +0000 (0:00:00.337) 0:10:30.062 ******* 2026-02-20 02:56:29.169805 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:56:29.169815 | orchestrator | 2026-02-20 02:56:29.169825 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-20 02:56:29.169834 | orchestrator | Friday 20 February 2026 02:56:27 +0000 (0:00:00.720) 0:10:30.782 ******* 2026-02-20 02:56:29.169844 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:29.169854 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:29.169864 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:29.169873 | orchestrator | 2026-02-20 02:56:29.169883 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-20 02:56:29.169892 | orchestrator | Friday 20 February 2026 02:56:27 +0000 (0:00:00.311) 0:10:31.094 ******* 2026-02-20 02:56:29.169902 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:29.169911 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:29.169921 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:29.169930 | orchestrator | 2026-02-20 02:56:29.169940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-20 02:56:29.169949 | orchestrator | Friday 20 February 2026 02:56:27 +0000 (0:00:00.333) 0:10:31.428 ******* 2026-02-20 02:56:29.169959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:56:29.169975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:56:29.169985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:56:29.170085 | orchestrator 
| skipping: [testbed-node-3] 2026-02-20 02:56:29.170096 | orchestrator | 2026-02-20 02:56:29.170106 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-20 02:56:29.170122 | orchestrator | Friday 20 February 2026 02:56:28 +0000 (0:00:00.801) 0:10:32.229 ******* 2026-02-20 02:56:29.170133 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:29.170142 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:29.170152 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:29.170169 | orchestrator | 2026-02-20 02:56:29.170179 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:56:29.170189 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-20 02:56:29.170200 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-20 02:56:29.170209 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-20 02:56:29.170219 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-20 02:56:29.170228 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-20 02:56:29.170238 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-20 02:56:29.170248 | orchestrator | 2026-02-20 02:56:29.170257 | orchestrator | 2026-02-20 02:56:29.170267 | orchestrator | 2026-02-20 02:56:29.170277 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:56:29.170286 | orchestrator | Friday 20 February 2026 02:56:29 +0000 (0:00:00.464) 0:10:32.694 ******* 2026-02-20 02:56:29.170296 | orchestrator | =============================================================================== 
2026-02-20 02:56:29.170305 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.54s 2026-02-20 02:56:29.170314 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.35s 2026-02-20 02:56:29.170324 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.27s 2026-02-20 02:56:29.170341 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.15s 2026-02-20 02:56:29.557141 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.82s 2026-02-20 02:56:29.557263 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.67s 2026-02-20 02:56:29.557279 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.41s 2026-02-20 02:56:29.557291 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.51s 2026-02-20 02:56:29.557302 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.34s 2026-02-20 02:56:29.557312 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.18s 2026-02-20 02:56:29.557323 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.73s 2026-02-20 02:56:29.557335 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s 2026-02-20 02:56:29.557346 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.05s 2026-02-20 02:56:29.557356 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.42s 2026-02-20 02:56:29.557367 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.05s 2026-02-20 02:56:29.557406 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.61s 2026-02-20 
02:56:29.557417 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.56s 2026-02-20 02:56:29.557428 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.51s 2026-02-20 02:56:29.557447 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.33s 2026-02-20 02:56:29.557466 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.07s 2026-02-20 02:56:31.878318 | orchestrator | 2026-02-20 02:56:31 | INFO  | Task c9e739e5-c540-40b4-b4ed-4a25ef15f896 (ceph-pools) was prepared for execution. 2026-02-20 02:56:31.878418 | orchestrator | 2026-02-20 02:56:31 | INFO  | It takes a moment until task c9e739e5-c540-40b4-b4ed-4a25ef15f896 (ceph-pools) has been started and output is visible here. 2026-02-20 02:56:45.394292 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-20 02:56:45.394404 | orchestrator | 2.16.14 2026-02-20 02:56:45.394421 | orchestrator | 2026-02-20 02:56:45.394433 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-20 02:56:45.394445 | orchestrator | 2026-02-20 02:56:45.394457 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 02:56:45.394468 | orchestrator | Friday 20 February 2026 02:56:36 +0000 (0:00:00.573) 0:00:00.573 ******* 2026-02-20 02:56:45.394479 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:56:45.394491 | orchestrator | 2026-02-20 02:56:45.394502 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 02:56:45.394513 | orchestrator | Friday 20 February 2026 02:56:36 +0000 (0:00:00.601) 0:00:01.175 ******* 2026-02-20 02:56:45.394524 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394535 | 
orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394561 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394573 | orchestrator | 2026-02-20 02:56:45.394584 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 02:56:45.394595 | orchestrator | Friday 20 February 2026 02:56:37 +0000 (0:00:00.613) 0:00:01.789 ******* 2026-02-20 02:56:45.394606 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394616 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394627 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394638 | orchestrator | 2026-02-20 02:56:45.394649 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 02:56:45.394660 | orchestrator | Friday 20 February 2026 02:56:37 +0000 (0:00:00.273) 0:00:02.063 ******* 2026-02-20 02:56:45.394670 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394681 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394692 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394702 | orchestrator | 2026-02-20 02:56:45.394713 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 02:56:45.394724 | orchestrator | Friday 20 February 2026 02:56:38 +0000 (0:00:00.813) 0:00:02.876 ******* 2026-02-20 02:56:45.394735 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394746 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394756 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394767 | orchestrator | 2026-02-20 02:56:45.394778 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 02:56:45.394789 | orchestrator | Friday 20 February 2026 02:56:38 +0000 (0:00:00.301) 0:00:03.178 ******* 2026-02-20 02:56:45.394800 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394810 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394821 | 
orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394834 | orchestrator | 2026-02-20 02:56:45.394847 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 02:56:45.394860 | orchestrator | Friday 20 February 2026 02:56:39 +0000 (0:00:00.284) 0:00:03.462 ******* 2026-02-20 02:56:45.394874 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.394910 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.394922 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.394934 | orchestrator | 2026-02-20 02:56:45.394947 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 02:56:45.394959 | orchestrator | Friday 20 February 2026 02:56:39 +0000 (0:00:00.335) 0:00:03.798 ******* 2026-02-20 02:56:45.394972 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:45.394985 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:45.394998 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:45.395033 | orchestrator | 2026-02-20 02:56:45.395046 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 02:56:45.395059 | orchestrator | Friday 20 February 2026 02:56:39 +0000 (0:00:00.532) 0:00:04.331 ******* 2026-02-20 02:56:45.395071 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.395083 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.395095 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.395108 | orchestrator | 2026-02-20 02:56:45.395120 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 02:56:45.395132 | orchestrator | Friday 20 February 2026 02:56:40 +0000 (0:00:00.308) 0:00:04.639 ******* 2026-02-20 02:56:45.395145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:56:45.395157 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:56:45.395169 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:56:45.395181 | orchestrator | 2026-02-20 02:56:45.395192 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 02:56:45.395202 | orchestrator | Friday 20 February 2026 02:56:40 +0000 (0:00:00.671) 0:00:05.310 ******* 2026-02-20 02:56:45.395213 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:45.395224 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:45.395234 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:45.395245 | orchestrator | 2026-02-20 02:56:45.395256 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 02:56:45.395266 | orchestrator | Friday 20 February 2026 02:56:41 +0000 (0:00:00.446) 0:00:05.757 ******* 2026-02-20 02:56:45.395277 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:56:45.395287 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:56:45.395299 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:56:45.395310 | orchestrator | 2026-02-20 02:56:45.395321 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 02:56:45.395331 | orchestrator | Friday 20 February 2026 02:56:43 +0000 (0:00:02.125) 0:00:07.882 ******* 2026-02-20 02:56:45.395343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 02:56:45.395363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 02:56:45.395384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 02:56:45.395402 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:45.395419 | 
orchestrator | 2026-02-20 02:56:45.395456 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 02:56:45.395478 | orchestrator | Friday 20 February 2026 02:56:44 +0000 (0:00:00.639) 0:00:08.522 ******* 2026-02-20 02:56:45.395500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395564 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:45.395575 | orchestrator | 2026-02-20 02:56:45.395586 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 02:56:45.395597 | orchestrator | Friday 20 February 2026 02:56:45 +0000 (0:00:00.951) 0:00:09.474 ******* 2026-02-20 02:56:45.395610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395624 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:45.395647 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:45.395658 | orchestrator | 2026-02-20 02:56:45.395669 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 02:56:45.395679 | orchestrator | Friday 20 February 2026 02:56:45 +0000 (0:00:00.161) 0:00:09.635 ******* 2026-02-20 02:56:45.395692 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9a9a7d69b4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 02:56:42.126269', 'end': '2026-02-20 02:56:42.169184', 'delta': '0:00:00.042915', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9a9a7d69b4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 02:56:45.395707 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b179183cbe33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 02:56:42.679288', 'end': '2026-02-20 02:56:42.731579', 'delta': '0:00:00.052291', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b179183cbe33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 02:56:45.395728 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '28a82f95a8fd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 02:56:43.236512', 'end': '2026-02-20 02:56:43.297280', 'delta': '0:00:00.060768', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28a82f95a8fd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 02:56:51.948616 | orchestrator | 2026-02-20 02:56:51.948742 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 02:56:51.948759 | orchestrator | Friday 20 February 2026 02:56:45 +0000 (0:00:00.192) 0:00:09.828 ******* 2026-02-20 02:56:51.948770 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:51.948799 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:56:51.948819 | 
orchestrator | ok: [testbed-node-5] 2026-02-20 02:56:51.948831 | orchestrator | 2026-02-20 02:56:51.948842 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 02:56:51.948854 | orchestrator | Friday 20 February 2026 02:56:45 +0000 (0:00:00.421) 0:00:10.250 ******* 2026-02-20 02:56:51.948865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-20 02:56:51.948877 | orchestrator | 2026-02-20 02:56:51.948888 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 02:56:51.948899 | orchestrator | Friday 20 February 2026 02:56:47 +0000 (0:00:01.715) 0:00:11.965 ******* 2026-02-20 02:56:51.948910 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.948921 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.948932 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.948955 | orchestrator | 2026-02-20 02:56:51.948966 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 02:56:51.948987 | orchestrator | Friday 20 February 2026 02:56:47 +0000 (0:00:00.280) 0:00:12.246 ******* 2026-02-20 02:56:51.948998 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949076 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949090 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949101 | orchestrator | 2026-02-20 02:56:51.949112 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 02:56:51.949123 | orchestrator | Friday 20 February 2026 02:56:48 +0000 (0:00:00.789) 0:00:13.035 ******* 2026-02-20 02:56:51.949134 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949145 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949156 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949167 | orchestrator | 2026-02-20 02:56:51.949178 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 02:56:51.949189 | orchestrator | Friday 20 February 2026 02:56:48 +0000 (0:00:00.286) 0:00:13.322 ******* 2026-02-20 02:56:51.949199 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:56:51.949210 | orchestrator | 2026-02-20 02:56:51.949221 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 02:56:51.949232 | orchestrator | Friday 20 February 2026 02:56:49 +0000 (0:00:00.129) 0:00:13.451 ******* 2026-02-20 02:56:51.949243 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949254 | orchestrator | 2026-02-20 02:56:51.949264 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 02:56:51.949275 | orchestrator | Friday 20 February 2026 02:56:49 +0000 (0:00:00.226) 0:00:13.678 ******* 2026-02-20 02:56:51.949286 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949297 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949308 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949319 | orchestrator | 2026-02-20 02:56:51.949330 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 02:56:51.949341 | orchestrator | Friday 20 February 2026 02:56:49 +0000 (0:00:00.275) 0:00:13.953 ******* 2026-02-20 02:56:51.949352 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949363 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949373 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949409 | orchestrator | 2026-02-20 02:56:51.949420 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 02:56:51.949431 | orchestrator | Friday 20 February 2026 02:56:49 +0000 (0:00:00.301) 0:00:14.255 ******* 2026-02-20 02:56:51.949442 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 02:56:51.949453 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949463 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949474 | orchestrator | 2026-02-20 02:56:51.949485 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 02:56:51.949496 | orchestrator | Friday 20 February 2026 02:56:50 +0000 (0:00:00.498) 0:00:14.753 ******* 2026-02-20 02:56:51.949507 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949518 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949529 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949539 | orchestrator | 2026-02-20 02:56:51.949550 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 02:56:51.949561 | orchestrator | Friday 20 February 2026 02:56:50 +0000 (0:00:00.311) 0:00:15.065 ******* 2026-02-20 02:56:51.949571 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949582 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949593 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949603 | orchestrator | 2026-02-20 02:56:51.949614 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 02:56:51.949625 | orchestrator | Friday 20 February 2026 02:56:50 +0000 (0:00:00.306) 0:00:15.371 ******* 2026-02-20 02:56:51.949636 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:56:51.949646 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:56:51.949657 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:56:51.949667 | orchestrator | 2026-02-20 02:56:51.949678 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 02:56:51.949689 | orchestrator | Friday 20 February 2026 02:56:51 +0000 (0:00:00.489) 0:00:15.861 ******* 2026-02-20 02:56:51.949700 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 02:56:51.949711 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:56:51.949722 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:56:51.949732 | orchestrator |
2026-02-20 02:56:51.949743 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-20 02:56:51.949754 | orchestrator | Friday 20 February 2026 02:56:51 +0000 (0:00:00.314) 0:00:16.175 *******
2026-02-20 02:56:51.949793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:51.949928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.070458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.070474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.070561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.070595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.070621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070646 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:56:52.070658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.070683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.210600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.210617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.210638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.210650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.210663 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:56:52.210696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.210759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-20 02:56:52.429699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.429742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.429756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.429769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.429781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-20 02:56:52.429794 | orchestrator | skipping: [testbed-node-5]
2026-02-20 02:56:52.429807 | orchestrator |
2026-02-20 02:56:52.429819 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-20 02:56:52.429831 | orchestrator | Friday 20 February 2026 02:56:52 +0000 (0:00:00.571) 0:00:16.747 *******
2026-02-20 02:56:52.429858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539546 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.539718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.687937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688206 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688217 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.688249 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.688260 | orchestrator | skipping: [testbed-node-3]
2026-02-20 02:56:52.688281 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.797981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.798171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason':
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.798208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.798235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.798246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.798264 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.798285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.798295 | orchestrator | skipping: [testbed-node-4]
2026-02-20 02:56:52.798306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-20 02:56:52.798322 | orchestrator | skipping:
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804568 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-20 02:56:52.804657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:56:52.804672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:57:02.549626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:57:02.549783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-20-01-35-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-20 02:57:02.550689 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.550725 | orchestrator | 2026-02-20 02:57:02.550746 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 02:57:02.550767 | orchestrator | Friday 20 February 2026 02:56:52 +0000 (0:00:00.625) 0:00:17.372 ******* 2026-02-20 02:57:02.550810 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:57:02.550832 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:57:02.550863 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:57:02.550883 | orchestrator | 2026-02-20 02:57:02.550921 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 02:57:02.550942 | orchestrator | Friday 20 February 2026 02:56:53 +0000 (0:00:00.861) 0:00:18.234 ******* 2026-02-20 02:57:02.550961 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:57:02.550979 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:57:02.550997 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:57:02.551016 | orchestrator | 2026-02-20 02:57:02.551086 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 02:57:02.551107 | orchestrator | Friday 20 February 2026 02:56:54 +0000 (0:00:00.292) 0:00:18.526 ******* 2026-02-20 02:57:02.551126 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:57:02.551143 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:57:02.551163 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:57:02.551180 | orchestrator | 2026-02-20 02:57:02.551199 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 02:57:02.551219 | orchestrator | Friday 20 February 2026 02:56:54 +0000 (0:00:00.625) 0:00:19.152 
******* 2026-02-20 02:57:02.551238 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.551256 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.551275 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.551293 | orchestrator | 2026-02-20 02:57:02.551312 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 02:57:02.551332 | orchestrator | Friday 20 February 2026 02:56:54 +0000 (0:00:00.291) 0:00:19.443 ******* 2026-02-20 02:57:02.551351 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.551369 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.551388 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.551406 | orchestrator | 2026-02-20 02:57:02.551425 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 02:57:02.551443 | orchestrator | Friday 20 February 2026 02:56:55 +0000 (0:00:00.665) 0:00:20.109 ******* 2026-02-20 02:57:02.551462 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.551480 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.551499 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.551516 | orchestrator | 2026-02-20 02:57:02.551535 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 02:57:02.551571 | orchestrator | Friday 20 February 2026 02:56:55 +0000 (0:00:00.315) 0:00:20.424 ******* 2026-02-20 02:57:02.551591 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-20 02:57:02.551610 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-20 02:57:02.551628 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-20 02:57:02.551647 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-20 02:57:02.551666 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-20 02:57:02.551686 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-20 02:57:02.551704 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-20 02:57:02.551722 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-20 02:57:02.551742 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-20 02:57:02.551760 | orchestrator | 2026-02-20 02:57:02.551779 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 02:57:02.551799 | orchestrator | Friday 20 February 2026 02:56:56 +0000 (0:00:00.980) 0:00:21.405 ******* 2026-02-20 02:57:02.551843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 02:57:02.551864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 02:57:02.551884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 02:57:02.551902 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.551921 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-20 02:57:02.551940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-20 02:57:02.551958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-20 02:57:02.551977 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.551995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 02:57:02.552013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 02:57:02.552060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 02:57:02.552079 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.552097 | orchestrator | 2026-02-20 02:57:02.552115 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 02:57:02.552134 | orchestrator | Friday 20 February 2026 02:56:57 +0000 (0:00:00.370) 0:00:21.775 ******* 2026-02-20 
02:57:02.552154 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 02:57:02.552173 | orchestrator | 2026-02-20 02:57:02.552191 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 02:57:02.552210 | orchestrator | Friday 20 February 2026 02:56:58 +0000 (0:00:00.700) 0:00:22.475 ******* 2026-02-20 02:57:02.552229 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552248 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.552267 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.552288 | orchestrator | 2026-02-20 02:57:02.552300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 02:57:02.552311 | orchestrator | Friday 20 February 2026 02:56:58 +0000 (0:00:00.302) 0:00:22.778 ******* 2026-02-20 02:57:02.552321 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552332 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.552343 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.552353 | orchestrator | 2026-02-20 02:57:02.552364 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 02:57:02.552375 | orchestrator | Friday 20 February 2026 02:56:58 +0000 (0:00:00.305) 0:00:23.084 ******* 2026-02-20 02:57:02.552386 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552397 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:57:02.552407 | orchestrator | skipping: [testbed-node-5] 2026-02-20 02:57:02.552418 | orchestrator | 2026-02-20 02:57:02.552436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 02:57:02.552457 | orchestrator | Friday 20 February 2026 02:56:59 +0000 (0:00:00.488) 0:00:23.573 ******* 2026-02-20 
02:57:02.552468 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:57:02.552479 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:57:02.552490 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:57:02.552501 | orchestrator | 2026-02-20 02:57:02.552511 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 02:57:02.552522 | orchestrator | Friday 20 February 2026 02:56:59 +0000 (0:00:00.432) 0:00:24.005 ******* 2026-02-20 02:57:02.552533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:57:02.552544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:57:02.552554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:57:02.552565 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552576 | orchestrator | 2026-02-20 02:57:02.552587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 02:57:02.552597 | orchestrator | Friday 20 February 2026 02:56:59 +0000 (0:00:00.369) 0:00:24.375 ******* 2026-02-20 02:57:02.552608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:57:02.552619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:57:02.552630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:57:02.552640 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552651 | orchestrator | 2026-02-20 02:57:02.552661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 02:57:02.552672 | orchestrator | Friday 20 February 2026 02:57:00 +0000 (0:00:00.380) 0:00:24.755 ******* 2026-02-20 02:57:02.552683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 02:57:02.552693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 02:57:02.552704 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 02:57:02.552715 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:57:02.552725 | orchestrator | 2026-02-20 02:57:02.552736 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 02:57:02.552746 | orchestrator | Friday 20 February 2026 02:57:00 +0000 (0:00:00.361) 0:00:25.117 ******* 2026-02-20 02:57:02.552757 | orchestrator | ok: [testbed-node-3] 2026-02-20 02:57:02.552768 | orchestrator | ok: [testbed-node-4] 2026-02-20 02:57:02.552778 | orchestrator | ok: [testbed-node-5] 2026-02-20 02:57:02.552789 | orchestrator | 2026-02-20 02:57:02.552800 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 02:57:02.552810 | orchestrator | Friday 20 February 2026 02:57:00 +0000 (0:00:00.325) 0:00:25.443 ******* 2026-02-20 02:57:02.552821 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 02:57:02.552832 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-20 02:57:02.552842 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-20 02:57:02.552853 | orchestrator | 2026-02-20 02:57:02.552863 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 02:57:02.552874 | orchestrator | Friday 20 February 2026 02:57:01 +0000 (0:00:00.758) 0:00:26.201 ******* 2026-02-20 02:57:02.552885 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:57:02.552905 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:58:40.851839 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:58:40.851959 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 02:58:40.851975 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-20 02:58:40.851988 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 02:58:40.851999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 02:58:40.852010 | orchestrator | 2026-02-20 02:58:40.852044 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 02:58:40.852056 | orchestrator | Friday 20 February 2026 02:57:02 +0000 (0:00:00.788) 0:00:26.990 ******* 2026-02-20 02:58:40.852067 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 02:58:40.852078 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 02:58:40.852088 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 02:58:40.852099 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 02:58:40.852172 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 02:58:40.852184 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 02:58:40.852194 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 02:58:40.852205 | orchestrator | 2026-02-20 02:58:40.852216 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-20 02:58:40.852227 | orchestrator | Friday 20 February 2026 02:57:04 +0000 (0:00:01.562) 0:00:28.552 ******* 2026-02-20 02:58:40.852238 | orchestrator | skipping: [testbed-node-3] 2026-02-20 02:58:40.852250 | orchestrator | skipping: [testbed-node-4] 2026-02-20 02:58:40.852261 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-20 02:58:40.852272 | orchestrator | 2026-02-20 02:58:40.852283 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-20 02:58:40.852293 | orchestrator | Friday 20 February 2026 02:57:04 +0000 (0:00:00.359) 0:00:28.912 ******* 2026-02-20 02:58:40.852321 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-20 02:58:40.852336 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-20 02:58:40.852347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-20 02:58:40.852358 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-20 02:58:40.852371 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-20 02:58:40.852384 | orchestrator | 2026-02-20 02:58:40.852397 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-20 02:58:40.852409 | orchestrator | Friday 20 February 2026 02:57:49 +0000 (0:00:44.839) 0:01:13.751 ******* 2026-02-20 02:58:40.852421 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852433 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852446 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852458 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852479 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852491 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852503 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-20 02:58:40.852515 | orchestrator | 2026-02-20 02:58:40.852529 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-20 02:58:40.852541 | orchestrator | Friday 20 February 2026 02:58:12 +0000 (0:00:23.220) 0:01:36.972 ******* 2026-02-20 02:58:40.852571 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852585 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852597 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852621 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852632 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852645 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 02:58:40.852657 | orchestrator | 2026-02-20 02:58:40.852668 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-20 02:58:40.852681 | orchestrator | Friday 20 February 2026 02:58:23 +0000 (0:00:11.316) 0:01:48.288 ******* 2026-02-20 02:58:40.852693 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852705 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 02:58:40.852718 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852731 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852741 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 02:58:40.852752 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852773 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 02:58:40.852784 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852805 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 02:58:40.852816 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852827 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852842 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-20 02:58:40.852853 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 02:58:40.852888 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 02:58:40.852899 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 02:58:40.852911 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-20 02:58:40.852931 | orchestrator | 2026-02-20 02:58:40.852950 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:58:40.852969 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-20 02:58:40.852989 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-20 02:58:40.853011 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-20 02:58:40.853022 | orchestrator | 2026-02-20 02:58:40.853032 | orchestrator | 2026-02-20 02:58:40.853043 | orchestrator | 2026-02-20 02:58:40.853054 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:58:40.853064 | orchestrator | Friday 20 February 2026 02:58:40 +0000 (0:00:16.979) 0:02:05.268 ******* 2026-02-20 02:58:40.853075 | orchestrator | =============================================================================== 2026-02-20 02:58:40.853085 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.84s 2026-02-20 02:58:40.853097 | orchestrator | generate keys ---------------------------------------------------------- 23.22s 2026-02-20 02:58:40.853132 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.98s 
2026-02-20 02:58:40.853143 | orchestrator | get keys from monitors ------------------------------------------------- 11.32s 2026-02-20 02:58:40.853154 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s 2026-02-20 02:58:40.853164 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s 2026-02-20 02:58:40.853175 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.56s 2026-02-20 02:58:40.853186 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.98s 2026-02-20 02:58:40.853196 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.95s 2026-02-20 02:58:40.853207 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.86s 2026-02-20 02:58:40.853217 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2026-02-20 02:58:40.853228 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.79s 2026-02-20 02:58:40.853238 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.79s 2026-02-20 02:58:40.853257 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.76s 2026-02-20 02:58:41.116222 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2026-02-20 02:58:41.116294 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2026-02-20 02:58:41.116300 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-02-20 02:58:41.116305 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.64s 2026-02-20 02:58:41.116309 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2026-02-20 
02:58:41.116314 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2026-02-20 02:58:43.310914 | orchestrator | 2026-02-20 02:58:43 | INFO  | Task 451dba1a-bafd-40a8-ac24-1ce9f969b20b (copy-ceph-keys) was prepared for execution. 2026-02-20 02:58:43.311007 | orchestrator | 2026-02-20 02:58:43 | INFO  | It takes a moment until task 451dba1a-bafd-40a8-ac24-1ce9f969b20b (copy-ceph-keys) has been started and output is visible here. 2026-02-20 02:59:18.797260 | orchestrator | 2026-02-20 02:59:18.797353 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-20 02:59:18.797363 | orchestrator | 2026-02-20 02:59:18.797370 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-20 02:59:18.797377 | orchestrator | Friday 20 February 2026 02:58:47 +0000 (0:00:00.117) 0:00:00.117 ******* 2026-02-20 02:59:18.797384 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-20 02:59:18.797391 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797398 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797420 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 02:59:18.797426 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797433 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-20 02:59:18.797451 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-20 02:59:18.797457 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-20 02:59:18.797464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-20 02:59:18.797470 | orchestrator | 2026-02-20 02:59:18.797476 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-20 02:59:18.797482 | orchestrator | Friday 20 February 2026 02:58:51 +0000 (0:00:04.283) 0:00:04.401 ******* 2026-02-20 02:59:18.797488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-20 02:59:18.797495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797501 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 02:59:18.797513 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797519 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-20 02:59:18.797525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-20 02:59:18.797531 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-20 02:59:18.797537 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-20 02:59:18.797543 | orchestrator | 2026-02-20 02:59:18.797550 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-20 02:59:18.797556 | orchestrator | Friday 20 February 2026 02:58:55 +0000 (0:00:04.092) 0:00:08.494 ******* 2026-02-20 02:59:18.797563 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-20 02:59:18.797570 | orchestrator | 2026-02-20 02:59:18.797576 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-20 02:59:18.797582 | orchestrator | Friday 20 February 2026 02:58:56 +0000 (0:00:00.946) 0:00:09.440 ******* 2026-02-20 02:59:18.797589 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-20 02:59:18.797595 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797602 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797608 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 02:59:18.797614 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797620 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-20 02:59:18.797626 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-20 02:59:18.797632 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-20 02:59:18.797638 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-20 02:59:18.797644 | orchestrator | 2026-02-20 02:59:18.797650 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-20 02:59:18.797656 | orchestrator | Friday 20 February 2026 02:59:09 +0000 (0:00:12.669) 0:00:22.110 ******* 2026-02-20 02:59:18.797667 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-20 02:59:18.797674 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-20 02:59:18.797680 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-20 02:59:18.797687 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-20 02:59:18.797704 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-20 02:59:18.797711 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-20 02:59:18.797717 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-20 02:59:18.797723 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-20 02:59:18.797730 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-20 02:59:18.797736 | orchestrator | 2026-02-20 02:59:18.797742 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-20 02:59:18.797748 | orchestrator | Friday 20 February 2026 02:59:11 +0000 (0:00:02.924) 0:00:25.034 ******* 2026-02-20 02:59:18.797755 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-20 02:59:18.797761 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797768 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797774 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 02:59:18.797783 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-20 02:59:18.797791 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-20 02:59:18.797798 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-20 02:59:18.797805 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-20 02:59:18.797812 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-20 02:59:18.797819 | orchestrator | 2026-02-20 02:59:18.797829 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 02:59:18.797839 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 02:59:18.797852 | orchestrator | 2026-02-20 02:59:18.797868 | orchestrator | 2026-02-20 02:59:18.797878 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 02:59:18.797888 | orchestrator | Friday 20 February 2026 02:59:18 +0000 (0:00:06.559) 0:00:31.594 ******* 2026-02-20 02:59:18.797898 | orchestrator | =============================================================================== 2026-02-20 02:59:18.797908 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.67s 2026-02-20 02:59:18.797918 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.56s 2026-02-20 02:59:18.797928 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.28s 2026-02-20 02:59:18.797939 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.09s 2026-02-20 02:59:18.797949 | orchestrator | Check if target directories exist --------------------------------------- 2.92s 2026-02-20 02:59:18.797959 | orchestrator | Create share directory -------------------------------------------------- 0.95s 2026-02-20 02:59:31.143052 | orchestrator | 2026-02-20 02:59:31 | INFO  | Task b48f7060-e579-473f-8386-2ec14abccc94 (cephclient) was prepared for execution. 
2026-02-20 02:59:31.143315 | orchestrator | 2026-02-20 02:59:31 | INFO  | It takes a moment until task b48f7060-e579-473f-8386-2ec14abccc94 (cephclient) has been started and output is visible here.
2026-02-20 03:00:28.595760 | orchestrator |
2026-02-20 03:00:28.595908 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-20 03:00:28.595926 | orchestrator |
2026-02-20 03:00:28.595939 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-20 03:00:28.595952 | orchestrator | Friday 20 February 2026 02:59:34 +0000 (0:00:00.171) 0:00:00.171 *******
2026-02-20 03:00:28.595964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-20 03:00:28.595977 | orchestrator |
2026-02-20 03:00:28.595988 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-20 03:00:28.595999 | orchestrator | Friday 20 February 2026 02:59:35 +0000 (0:00:00.184) 0:00:00.355 *******
2026-02-20 03:00:28.596011 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-20 03:00:28.596022 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-20 03:00:28.596034 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-20 03:00:28.596046 | orchestrator |
2026-02-20 03:00:28.596057 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-20 03:00:28.596068 | orchestrator | Friday 20 February 2026 02:59:36 +0000 (0:00:01.090) 0:00:01.446 *******
2026-02-20 03:00:28.596080 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-20 03:00:28.596091 | orchestrator |
2026-02-20 03:00:28.596102 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-20 03:00:28.596113 | orchestrator | Friday 20 February 2026 02:59:37 +0000 (0:00:01.154) 0:00:02.600 *******
2026-02-20 03:00:28.596124 | orchestrator | changed: [testbed-manager]
2026-02-20 03:00:28.596136 | orchestrator |
2026-02-20 03:00:28.596147 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-20 03:00:28.596158 | orchestrator | Friday 20 February 2026 02:59:38 +0000 (0:00:00.760) 0:00:03.361 *******
2026-02-20 03:00:28.596169 | orchestrator | changed: [testbed-manager]
2026-02-20 03:00:28.596180 | orchestrator |
2026-02-20 03:00:28.596219 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-20 03:00:28.596232 | orchestrator | Friday 20 February 2026 02:59:38 +0000 (0:00:00.790) 0:00:04.151 *******
2026-02-20 03:00:28.596243 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-20 03:00:28.596255 | orchestrator | ok: [testbed-manager]
2026-02-20 03:00:28.596266 | orchestrator |
2026-02-20 03:00:28.596278 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-20 03:00:28.596291 | orchestrator | Friday 20 February 2026 03:00:19 +0000 (0:00:40.101) 0:00:44.252 *******
2026-02-20 03:00:28.596304 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-20 03:00:28.596318 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-20 03:00:28.596330 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-20 03:00:28.596342 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-20 03:00:28.596355 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-20 03:00:28.596368 | orchestrator |
2026-02-20 03:00:28.596381 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-20 03:00:28.596393 | orchestrator | Friday 20 February 2026 03:00:23 +0000 (0:00:03.978) 0:00:48.231 *******
2026-02-20 03:00:28.596405 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-20 03:00:28.596417 | orchestrator |
2026-02-20 03:00:28.596445 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-20 03:00:28.596458 | orchestrator | Friday 20 February 2026 03:00:23 +0000 (0:00:00.451) 0:00:48.683 *******
2026-02-20 03:00:28.596471 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:00:28.596483 | orchestrator |
2026-02-20 03:00:28.596495 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-20 03:00:28.596532 | orchestrator | Friday 20 February 2026 03:00:23 +0000 (0:00:00.131) 0:00:48.814 *******
2026-02-20 03:00:28.596545 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:00:28.596558 | orchestrator |
2026-02-20 03:00:28.596571 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-20 03:00:28.596583 | orchestrator | Friday 20 February 2026 03:00:24 +0000 (0:00:00.464) 0:00:49.279 *******
2026-02-20 03:00:28.596595 | orchestrator | changed: [testbed-manager]
2026-02-20 03:00:28.596608 | orchestrator |
2026-02-20 03:00:28.596621 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-20 03:00:28.596633 | orchestrator | Friday 20 February 2026 03:00:25 +0000 (0:00:01.488) 0:00:50.768 *******
2026-02-20 03:00:28.596644 | orchestrator | changed: [testbed-manager]
2026-02-20 03:00:28.596655 | orchestrator |
2026-02-20 03:00:28.596666 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] *******
2026-02-20 03:00:28.596676 | orchestrator | Friday 20 February 2026 03:00:26 +0000 (0:00:00.559) 0:00:51.478 *******
2026-02-20 03:00:28.596687 | orchestrator | changed: [testbed-manager]
2026-02-20 03:00:28.596698 | orchestrator |
2026-02-20 03:00:28.596709 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-20 03:00:28.596720 | orchestrator | Friday 20 February 2026 03:00:26 +0000 (0:00:00.559) 0:00:52.037 *******
2026-02-20 03:00:28.596731 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-20 03:00:28.596742 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-20 03:00:28.596753 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-20 03:00:28.596764 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-20 03:00:28.596775 | orchestrator |
2026-02-20 03:00:28.596786 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:00:28.596797 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 03:00:28.596809 | orchestrator |
2026-02-20 03:00:28.596819 | orchestrator |
2026-02-20 03:00:28.596848 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:00:28.596860 | orchestrator | Friday 20 February 2026 03:00:28 +0000 (0:00:01.442) 0:00:53.480 *******
2026-02-20 03:00:28.596871 | orchestrator | ===============================================================================
2026-02-20 03:00:28.596882 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.10s
2026-02-20 03:00:28.596893 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.98s
2026-02-20 03:00:28.596903 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.49s
2026-02-20 03:00:28.596919 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.44s
2026-02-20 03:00:28.596937 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2026-02-20 03:00:28.596964 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.09s
2026-02-20 03:00:28.596983 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.79s
2026-02-20 03:00:28.597000 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.76s
2026-02-20 03:00:28.597017 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2026-02-20 03:00:28.597033 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------- 0.56s
2026-02-20 03:00:28.597051 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.46s
2026-02-20 03:00:28.597069 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2026-02-20 03:00:28.597087 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.18s
2026-02-20 03:00:28.597106 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-02-20 03:00:30.839555 | orchestrator | 2026-02-20 03:00:30 | INFO  | Task c2c91634-3e06-44d4-b5b0-2e25034ecd90 (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-20 03:00:30.839717 | orchestrator | 2026-02-20 03:00:30 | INFO  | It takes a moment until task c2c91634-3e06-44d4-b5b0-2e25034ecd90 (ceph-bootstrap-dashboard) has been started and output is visible here.
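The ceph-bootstrap-dashboard task announced above boils down to a short sequence of ceph CLI calls. A sketch of the equivalent manual sequence — written to a script here rather than executed, since it needs a live Ceph cluster and admin keyring; the password file path is an assumption matching the play's temporary-file task:

```shell
# Sketch only: these commands assume a reachable Ceph cluster, so they are
# written to a script here instead of being executed.
cat > /tmp/bootstrap-dashboard.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
# The password is read from a file (-i) so it never shows up in the process list.
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
EOF
chmod +x /tmp/bootstrap-dashboard.sh
grep -c '^ceph ' /tmp/bootstrap-dashboard.sh   # 8 ceph invocations
```

The mgr/dashboard settings only take effect once the manager daemons reload the module, which is why the plays that follow restart the ceph manager service on each node.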
2026-02-20 03:01:51.841212 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-20 03:01:51.841382 | orchestrator | 2.16.14
2026-02-20 03:01:51.841402 | orchestrator |
2026-02-20 03:01:51.841414 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2026-02-20 03:01:51.841427 | orchestrator |
2026-02-20 03:01:51.841438 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-20 03:01:51.841448 | orchestrator | Friday 20 February 2026 03:00:34 +0000 (0:00:00.267) 0:00:00.267 *******
2026-02-20 03:01:51.841460 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841471 | orchestrator |
2026-02-20 03:01:51.841482 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-20 03:01:51.841493 | orchestrator | Friday 20 February 2026 03:00:36 +0000 (0:00:01.370) 0:00:01.637 *******
2026-02-20 03:01:51.841504 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841515 | orchestrator |
2026-02-20 03:01:51.841526 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-20 03:01:51.841537 | orchestrator | Friday 20 February 2026 03:00:37 +0000 (0:00:00.981) 0:00:02.619 *******
2026-02-20 03:01:51.841547 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841558 | orchestrator |
2026-02-20 03:01:51.841582 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-20 03:01:51.841594 | orchestrator | Friday 20 February 2026 03:00:38 +0000 (0:00:01.036) 0:00:03.655 *******
2026-02-20 03:01:51.841605 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841616 | orchestrator |
2026-02-20 03:01:51.841627 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-20 03:01:51.841637 | orchestrator | Friday 20 February 2026 03:00:39 +0000 (0:00:01.149) 0:00:04.805 *******
2026-02-20 03:01:51.841648 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841659 | orchestrator |
2026-02-20 03:01:51.841670 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-20 03:01:51.841681 | orchestrator | Friday 20 February 2026 03:00:40 +0000 (0:00:01.040) 0:00:05.846 *******
2026-02-20 03:01:51.841692 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841703 | orchestrator |
2026-02-20 03:01:51.841714 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-20 03:01:51.841725 | orchestrator | Friday 20 February 2026 03:00:41 +0000 (0:00:01.024) 0:00:06.871 *******
2026-02-20 03:01:51.841735 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841746 | orchestrator |
2026-02-20 03:01:51.841757 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-20 03:01:51.841771 | orchestrator | Friday 20 February 2026 03:00:43 +0000 (0:00:02.126) 0:00:08.997 *******
2026-02-20 03:01:51.841783 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841795 | orchestrator |
2026-02-20 03:01:51.841807 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-20 03:01:51.841820 | orchestrator | Friday 20 February 2026 03:00:44 +0000 (0:00:01.146) 0:00:10.143 *******
2026-02-20 03:01:51.841832 | orchestrator | changed: [testbed-manager]
2026-02-20 03:01:51.841844 | orchestrator |
2026-02-20 03:01:51.841857 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-20 03:01:51.841870 | orchestrator | Friday 20 February 2026 03:01:27 +0000 (0:00:42.271) 0:00:52.415 *******
2026-02-20 03:01:51.841882 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:01:51.841895 | orchestrator |
2026-02-20 03:01:51.841908 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-20 03:01:51.841920 | orchestrator |
2026-02-20 03:01:51.841932 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-20 03:01:51.841945 | orchestrator | Friday 20 February 2026 03:01:27 +0000 (0:00:00.160) 0:00:52.576 *******
2026-02-20 03:01:51.841979 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:01:51.841992 | orchestrator |
2026-02-20 03:01:51.842004 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-20 03:01:51.842063 | orchestrator |
2026-02-20 03:01:51.842077 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-20 03:01:51.842089 | orchestrator | Friday 20 February 2026 03:01:38 +0000 (0:00:11.632) 0:01:04.208 *******
2026-02-20 03:01:51.842103 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:01:51.842115 | orchestrator |
2026-02-20 03:01:51.842128 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-20 03:01:51.842138 | orchestrator |
2026-02-20 03:01:51.842149 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-20 03:01:51.842160 | orchestrator | Friday 20 February 2026 03:01:40 +0000 (0:00:01.231) 0:01:05.439 *******
2026-02-20 03:01:51.842171 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:01:51.842184 | orchestrator |
2026-02-20 03:01:51.842203 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:01:51.842223 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 03:01:51.842242 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:01:51.842339 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:01:51.842354 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:01:51.842365 | orchestrator |
2026-02-20 03:01:51.842376 | orchestrator |
2026-02-20 03:01:51.842386 | orchestrator |
2026-02-20 03:01:51.842397 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:01:51.842407 | orchestrator | Friday 20 February 2026 03:01:51 +0000 (0:00:11.398) 0:01:16.838 *******
2026-02-20 03:01:51.842418 | orchestrator | ===============================================================================
2026-02-20 03:01:51.842431 | orchestrator | Create admin user ------------------------------------------------------ 42.27s
2026-02-20 03:01:51.842475 | orchestrator | Restart ceph manager service ------------------------------------------- 24.26s
2026-02-20 03:01:51.842502 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.13s
2026-02-20 03:01:51.842520 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.37s
2026-02-20 03:01:51.842539 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.15s
2026-02-20 03:01:51.842556 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.15s
2026-02-20 03:01:51.842574 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.04s
2026-02-20 03:01:51.842593 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s
2026-02-20 03:01:51.842611 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.02s
2026-02-20 03:01:51.842630 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.98s
2026-02-20 03:01:51.842648 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-02-20 03:01:52.099864 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-20 03:01:54.060131 | orchestrator | 2026-02-20 03:01:54 | INFO  | Task 423aa5e0-15ad-495c-ac81-8e142ac5cdf3 (keystone) was prepared for execution.
2026-02-20 03:01:54.060206 | orchestrator | 2026-02-20 03:01:54 | INFO  | It takes a moment until task 423aa5e0-15ad-495c-ac81-8e142ac5cdf3 (keystone) has been started and output is visible here.
2026-02-20 03:02:00.138785 | orchestrator |
2026-02-20 03:02:00.138868 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:02:00.138903 | orchestrator |
2026-02-20 03:02:00.138914 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:02:00.138924 | orchestrator | Friday 20 February 2026 03:01:57 +0000 (0:00:00.186) 0:00:00.186 *******
2026-02-20 03:02:00.138933 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:02:00.138944 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:02:00.138954 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:02:00.138964 | orchestrator |
2026-02-20 03:02:00.138974 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:02:00.138984 | orchestrator | Friday 20 February 2026 03:01:57 +0000 (0:00:00.233) 0:00:00.420 *******
2026-02-20 03:02:00.138994 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-20 03:02:00.139004 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-20 03:02:00.139013 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-20 03:02:00.139023 | orchestrator |
2026-02-20 03:02:00.139033 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-20 03:02:00.139043 | orchestrator |
2026-02-20 03:02:00.139053 | orchestrator | TASK
[keystone : include_tasks] ************************************************ 2026-02-20 03:02:00.139063 | orchestrator | Friday 20 February 2026 03:01:58 +0000 (0:00:00.328) 0:00:00.748 ******* 2026-02-20 03:02:00.139072 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:02:00.139083 | orchestrator | 2026-02-20 03:02:00.139093 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-20 03:02:00.139103 | orchestrator | Friday 20 February 2026 03:01:58 +0000 (0:00:00.406) 0:00:01.155 ******* 2026-02-20 03:02:00.139118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:00.139133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:00.139171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:00.139191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:02:00.139299 | orchestrator | 2026-02-20 03:02:00.139310 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-20 03:02:00.139327 | orchestrator | Friday 20 February 2026 03:02:00 +0000 (0:00:01.480) 0:00:02.635 ******* 2026-02-20 03:02:05.279003 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:05.279127 | orchestrator | 2026-02-20 03:02:05.279144 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-20 03:02:05.279157 | orchestrator | Friday 20 February 2026 03:02:00 +0000 (0:00:00.209) 0:00:02.844 ******* 2026-02-20 03:02:05.279169 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:05.279180 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:05.279191 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:05.279202 | orchestrator | 2026-02-20 03:02:05.279263 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-20 03:02:05.279305 | orchestrator | Friday 20 February 2026 03:02:00 +0000 (0:00:00.274) 0:00:03.119 ******* 2026-02-20 03:02:05.279317 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:02:05.279328 | orchestrator | 2026-02-20 03:02:05.279339 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-20 03:02:05.279350 | orchestrator | Friday 20 February 2026 03:02:01 +0000 (0:00:00.707) 0:00:03.826 ******* 2026-02-20 03:02:05.279362 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:02:05.279373 | orchestrator | 2026-02-20 03:02:05.279384 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-20 03:02:05.279395 | orchestrator | Friday 20 February 2026 03:02:01 +0000 (0:00:00.491) 0:00:04.318 ******* 2026-02-20 03:02:05.279412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:05.279429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:05.279481 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:05.279513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:05.279527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:05.279539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:05.279550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:02:05.279570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:05.279581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:05.279592 | orchestrator |
2026-02-20 03:02:05.279604 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-20 03:02:05.279615 | orchestrator | Friday 20 February 2026 03:02:04 +0000 (0:00:02.914) 0:00:07.233 *******
2026-02-20 03:02:05.279635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:06.028348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:06.028562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:06.028603 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:06.028630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:06.028684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:06.028698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:06.028716 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:06.028761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:06.028782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})
2026-02-20 03:02:06.028802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:06.028814 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:02:06.028825 | orchestrator |
2026-02-20 03:02:06.028837 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-20 03:02:06.028849 | orchestrator | Friday 20 February 2026 03:02:05 +0000 (0:00:00.551) 0:00:07.785 *******
2026-02-20 03:02:06.028869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:06.028891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:06.028923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:09.419088 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:09.419173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:09.419211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:09.419224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:09.419234 | 
orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:09.419256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:09.419268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:09.419353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:09.419373 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:02:09.419384 | orchestrator |
2026-02-20 03:02:09.419395 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-02-20 03:02:09.419406 | orchestrator | Friday 20 February 2026 03:02:06 +0000 (0:00:00.742) 0:00:08.527 *******
2026-02-20 03:02:09.419416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 03:02:09.419433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value':
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:09.419445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:09.419464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:13.971140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:02:13.971236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-20 03:02:13.971252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:13.971336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:13.971351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:13.971364 | orchestrator |
2026-02-20 03:02:13.971377 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-02-20 03:02:13.971390 | orchestrator | Friday 20 February 2026 03:02:09 +0000 (0:00:03.393) 0:00:11.921 *******
2026-02-20 03:02:13.971421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-20 03:02:13.971456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-20 03:02:13.971470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:13.971489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:13.971501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:13.971521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:17.290798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:17.290903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:17.290930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-20 03:02:17.290952 | orchestrator |
2026-02-20 03:02:17.290976 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-20 03:02:17.290999 | orchestrator | Friday 20 February 2026 03:02:13 +0000 (0:00:04.550) 0:00:16.471 *******
2026-02-20 03:02:17.291020 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:02:17.291033 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:02:17.291044 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:02:17.291054 | orchestrator |
2026-02-20 03:02:17.291082 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-20 03:02:17.291094 | orchestrator | Friday 20 February 2026 03:02:15 +0000 (0:00:01.314) 0:00:17.786 ******* 2026-02-20 03:02:17.291119 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:17.291130 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:17.291141 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:17.291152 | orchestrator | 2026-02-20 03:02:17.291163 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-20 03:02:17.291174 | orchestrator | Friday 20 February 2026 03:02:15 +0000 (0:00:00.706) 0:00:18.492 ******* 2026-02-20 03:02:17.291185 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:17.291196 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:17.291216 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:17.291236 | orchestrator | 2026-02-20 03:02:17.291256 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-20 03:02:17.291355 | orchestrator | Friday 20 February 2026 03:02:16 +0000 (0:00:00.454) 0:00:18.947 ******* 2026-02-20 03:02:17.291380 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:17.291397 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:17.291410 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:17.291422 | orchestrator | 2026-02-20 03:02:17.291435 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-20 03:02:17.291448 | orchestrator | Friday 20 February 2026 03:02:16 +0000 (0:00:00.307) 0:00:19.254 ******* 2026-02-20 03:02:17.291483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:17.291500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:17.291514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:17.291531 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:17.291552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:17.291568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:17.291590 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:17.291604 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:17.291627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-20 03:02:35.338464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 03:02:35.338587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 03:02:35.338605 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:35.338619 | orchestrator | 2026-02-20 03:02:35.338632 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-20 03:02:35.338644 | orchestrator | Friday 20 February 2026 03:02:17 +0000 (0:00:00.540) 0:00:19.795 ******* 2026-02-20 03:02:35.338655 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:35.338689 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:35.338701 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:35.338727 | orchestrator | 2026-02-20 03:02:35.338740 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-20 03:02:35.338751 | orchestrator | Friday 20 February 2026 03:02:17 +0000 (0:00:00.282) 0:00:20.077 ******* 2026-02-20 03:02:35.338762 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-20 03:02:35.338774 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-20 03:02:35.338785 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-20 03:02:35.338795 | orchestrator | 2026-02-20 03:02:35.338807 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-20 03:02:35.338818 | orchestrator | Friday 20 February 2026 03:02:19 +0000 (0:00:01.743) 0:00:21.820 ******* 2026-02-20 03:02:35.338829 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:02:35.338840 | orchestrator | 2026-02-20 03:02:35.338850 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-20 03:02:35.338861 | orchestrator | Friday 20 February 2026 03:02:20 +0000 (0:00:00.873) 0:00:22.694 ******* 2026-02-20 03:02:35.338872 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:02:35.338883 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:02:35.338894 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:02:35.338905 | orchestrator | 2026-02-20 03:02:35.338918 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-20 03:02:35.338930 | orchestrator | Friday 20 February 2026 03:02:20 +0000 (0:00:00.514) 0:00:23.208 ******* 2026-02-20 03:02:35.338943 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-20 03:02:35.338956 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:02:35.338969 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-20 03:02:35.338981 | orchestrator | 2026-02-20 03:02:35.338994 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-20 03:02:35.339007 | orchestrator | Friday 20 February 2026 03:02:21 +0000 (0:00:00.967) 
0:00:24.175 ******* 2026-02-20 03:02:35.339019 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:02:35.339033 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:02:35.339046 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:02:35.339059 | orchestrator | 2026-02-20 03:02:35.339072 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-20 03:02:35.339085 | orchestrator | Friday 20 February 2026 03:02:22 +0000 (0:00:00.455) 0:00:24.630 ******* 2026-02-20 03:02:35.339098 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-20 03:02:35.339110 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-20 03:02:35.339122 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-20 03:02:35.339135 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-20 03:02:35.339147 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-20 03:02:35.339160 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-20 03:02:35.339173 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-20 03:02:35.339186 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-20 03:02:35.339215 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-20 03:02:35.339229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-20 03:02:35.339242 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-20 
03:02:35.339266 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-20 03:02:35.339287 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-20 03:02:35.339356 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-20 03:02:35.339376 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-20 03:02:35.339396 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:02:35.339411 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:02:35.339422 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:02:35.339433 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:02:35.339444 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:02:35.339455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:02:35.339466 | orchestrator | 2026-02-20 03:02:35.339477 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-20 03:02:35.339487 | orchestrator | Friday 20 February 2026 03:02:30 +0000 (0:00:08.354) 0:00:32.985 ******* 2026-02-20 03:02:35.339498 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:02:35.339515 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:02:35.339526 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:02:35.339537 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:02:35.339548 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:02:35.339559 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:02:35.339570 | orchestrator | 2026-02-20 03:02:35.339581 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-20 03:02:35.339592 | orchestrator | Friday 20 February 2026 03:02:33 +0000 (0:00:02.617) 0:00:35.602 ******* 2026-02-20 03:02:35.339606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:02:35.339630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:04:21.467099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-20 03:04:21.467238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-20 03:04:21.467361 | orchestrator | 2026-02-20 03:04:21.467420 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-20 03:04:21.467435 | orchestrator | Friday 20 February 2026 03:02:35 +0000 (0:00:02.235) 0:00:37.838 ******* 2026-02-20 03:04:21.467447 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:04:21.467459 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:04:21.467470 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:04:21.467481 | orchestrator | 2026-02-20 03:04:21.467492 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-20 03:04:21.467503 | orchestrator | Friday 20 February 2026 03:02:35 +0000 (0:00:00.447) 0:00:38.286 ******* 2026-02-20 03:04:21.467514 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.467524 | orchestrator | 2026-02-20 03:04:21.467535 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-20 03:04:21.467546 | orchestrator | Friday 20 February 2026 03:02:37 +0000 (0:00:02.225) 0:00:40.511 ******* 2026-02-20 03:04:21.467556 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.467567 | orchestrator | 2026-02-20 03:04:21.467578 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-20 03:04:21.467596 | orchestrator | Friday 20 February 2026 03:02:40 +0000 (0:00:02.245) 0:00:42.757 ******* 2026-02-20 03:04:21.467609 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:04:21.467622 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:04:21.467634 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:04:21.467646 | orchestrator | 2026-02-20 03:04:21.467659 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-20 03:04:21.467671 | orchestrator | Friday 20 February 2026 03:02:41 +0000 (0:00:00.861) 0:00:43.619 ******* 2026-02-20 03:04:21.467683 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:04:21.467696 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:04:21.467709 | orchestrator | ok: 
[testbed-node-2] 2026-02-20 03:04:21.467722 | orchestrator | 2026-02-20 03:04:21.467735 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-20 03:04:21.467748 | orchestrator | Friday 20 February 2026 03:02:41 +0000 (0:00:00.295) 0:00:43.915 ******* 2026-02-20 03:04:21.467760 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:04:21.467772 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:04:21.467785 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:04:21.467797 | orchestrator | 2026-02-20 03:04:21.467810 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-20 03:04:21.467822 | orchestrator | Friday 20 February 2026 03:02:41 +0000 (0:00:00.469) 0:00:44.384 ******* 2026-02-20 03:04:21.467843 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.467856 | orchestrator | 2026-02-20 03:04:21.467869 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-20 03:04:21.467881 | orchestrator | Friday 20 February 2026 03:02:56 +0000 (0:00:14.727) 0:00:59.112 ******* 2026-02-20 03:04:21.467894 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.467906 | orchestrator | 2026-02-20 03:04:21.467919 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-20 03:04:21.467932 | orchestrator | Friday 20 February 2026 03:03:07 +0000 (0:00:10.653) 0:01:09.765 ******* 2026-02-20 03:04:21.467944 | orchestrator | 2026-02-20 03:04:21.467957 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-20 03:04:21.467969 | orchestrator | Friday 20 February 2026 03:03:07 +0000 (0:00:00.065) 0:01:09.831 ******* 2026-02-20 03:04:21.467980 | orchestrator | 2026-02-20 03:04:21.467991 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-20 
03:04:21.468002 | orchestrator | Friday 20 February 2026 03:03:07 +0000 (0:00:00.067) 0:01:09.898 ******* 2026-02-20 03:04:21.468013 | orchestrator | 2026-02-20 03:04:21.468024 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-20 03:04:21.468035 | orchestrator | Friday 20 February 2026 03:03:07 +0000 (0:00:00.068) 0:01:09.967 ******* 2026-02-20 03:04:21.468045 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.468056 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:04:21.468067 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:04:21.468078 | orchestrator | 2026-02-20 03:04:21.468089 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-20 03:04:21.468100 | orchestrator | Friday 20 February 2026 03:03:58 +0000 (0:00:51.059) 0:02:01.026 ******* 2026-02-20 03:04:21.468111 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.468121 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:04:21.468132 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:04:21.468143 | orchestrator | 2026-02-20 03:04:21.468154 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-20 03:04:21.468165 | orchestrator | Friday 20 February 2026 03:04:08 +0000 (0:00:10.373) 0:02:11.400 ******* 2026-02-20 03:04:21.468176 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:04:21.468187 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:04:21.468198 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:04:21.468208 | orchestrator | 2026-02-20 03:04:21.468219 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-20 03:04:21.468230 | orchestrator | Friday 20 February 2026 03:04:20 +0000 (0:00:12.013) 0:02:23.413 ******* 2026-02-20 03:04:21.468248 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:05:13.210387 | orchestrator |
2026-02-20 03:05:13.210569 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-20 03:05:13.210588 | orchestrator | Friday 20 February 2026 03:04:21 +0000 (0:00:00.557) 0:02:23.971 *******
2026-02-20 03:05:13.210601 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:05:13.210613 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:05:13.210625 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:05:13.210636 | orchestrator |
2026-02-20 03:05:13.210647 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-20 03:05:13.210659 | orchestrator | Friday 20 February 2026 03:04:22 +0000 (0:00:01.039) 0:02:25.010 *******
2026-02-20 03:05:13.210670 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:05:13.210682 | orchestrator |
2026-02-20 03:05:13.210693 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-20 03:05:13.210704 | orchestrator | Friday 20 February 2026 03:04:24 +0000 (0:00:01.699) 0:02:26.709 *******
2026-02-20 03:05:13.210716 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-20 03:05:13.210727 | orchestrator |
2026-02-20 03:05:13.210738 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-20 03:05:13.210773 | orchestrator | Friday 20 February 2026 03:04:36 +0000 (0:00:12.003) 0:02:38.713 *******
2026-02-20 03:05:13.210785 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-20 03:05:13.210796 | orchestrator |
2026-02-20 03:05:13.210807 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-20 03:05:13.210817 | orchestrator | Friday 20 February 2026 03:05:01 +0000 (0:00:25.437) 0:03:04.151 *******
2026-02-20 03:05:13.210828 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-20 03:05:13.210841 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-20 03:05:13.210851 | orchestrator |
2026-02-20 03:05:13.210862 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-20 03:05:13.210887 | orchestrator | Friday 20 February 2026 03:05:08 +0000 (0:00:06.774) 0:03:10.925 *******
2026-02-20 03:05:13.210899 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:13.210910 | orchestrator |
2026-02-20 03:05:13.210923 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-20 03:05:13.210936 | orchestrator | Friday 20 February 2026 03:05:08 +0000 (0:00:00.127) 0:03:11.053 *******
2026-02-20 03:05:13.210948 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:13.210961 | orchestrator |
2026-02-20 03:05:13.210974 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-20 03:05:13.210986 | orchestrator | Friday 20 February 2026 03:05:08 +0000 (0:00:00.129) 0:03:11.183 *******
2026-02-20 03:05:13.210999 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:13.211011 | orchestrator |
2026-02-20 03:05:13.211024 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-20 03:05:13.211036 | orchestrator | Friday 20 February 2026 03:05:08 +0000 (0:00:00.122) 0:03:11.305 *******
2026-02-20 03:05:13.211049 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:13.211062 | orchestrator |
2026-02-20 03:05:13.211075 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-20 03:05:13.211087 | orchestrator | Friday 20 February 2026 03:05:09 +0000 (0:00:00.487) 0:03:11.793 *******
2026-02-20 03:05:13.211100 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:05:13.211113 | orchestrator |
2026-02-20 03:05:13.211126 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-20 03:05:13.211138 | orchestrator | Friday 20 February 2026 03:05:12 +0000 (0:00:03.153) 0:03:14.946 *******
2026-02-20 03:05:13.211150 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:13.211163 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:13.211176 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:13.211189 | orchestrator |
2026-02-20 03:05:13.211202 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:05:13.211217 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-20 03:05:13.211231 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-20 03:05:13.211244 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-20 03:05:13.211256 | orchestrator |
2026-02-20 03:05:13.211269 | orchestrator |
2026-02-20 03:05:13.211281 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:05:13.211292 | orchestrator | Friday 20 February 2026 03:05:12 +0000 (0:00:00.450) 0:03:15.397 *******
2026-02-20 03:05:13.211303 | orchestrator | ===============================================================================
2026-02-20 03:05:13.211314 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 51.06s
2026-02-20 03:05:13.211324 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.44s
2026-02-20 03:05:13.211343 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.73s
2026-02-20 03:05:13.211355 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.01s
2026-02-20 03:05:13.211365 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.00s
2026-02-20 03:05:13.211376 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.65s
2026-02-20 03:05:13.211387 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.37s
2026-02-20 03:05:13.211398 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.35s
2026-02-20 03:05:13.211409 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.77s
2026-02-20 03:05:13.211454 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.55s
2026-02-20 03:05:13.211466 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.39s
2026-02-20 03:05:13.211477 | orchestrator | keystone : Creating default user role ----------------------------------- 3.15s
2026-02-20 03:05:13.211495 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.91s
2026-02-20 03:05:13.211514 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s
2026-02-20 03:05:13.211533 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.25s
2026-02-20 03:05:13.211553 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s
2026-02-20 03:05:13.211573 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.23s
2026-02-20 03:05:13.211594 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.74s
2026-02-20 03:05:13.211614 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.70s
2026-02-20 03:05:13.211627 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.48s
2026-02-20 03:05:15.387978 | orchestrator | 2026-02-20 03:05:15 | INFO  | Task 3ff4a0e7-8dfe-4b9d-96ca-9c0d98ecfca6 (placement) was prepared for execution.
2026-02-20 03:05:15.388078 | orchestrator | 2026-02-20 03:05:15 | INFO  | It takes a moment until task 3ff4a0e7-8dfe-4b9d-96ca-9c0d98ecfca6 (placement) has been started and output is visible here.
2026-02-20 03:05:49.819114 | orchestrator |
2026-02-20 03:05:49.819229 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:05:49.819246 | orchestrator |
2026-02-20 03:05:49.819259 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:05:49.819287 | orchestrator | Friday 20 February 2026 03:05:19 +0000 (0:00:00.244) 0:00:00.244 *******
2026-02-20 03:05:49.819299 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:05:49.819312 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:05:49.819323 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:05:49.819334 | orchestrator |
2026-02-20 03:05:49.819345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:05:49.819357 | orchestrator | Friday 20 February 2026 03:05:19 +0000 (0:00:00.291) 0:00:00.536 *******
2026-02-20 03:05:49.819368 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-20 03:05:49.819380 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-20 03:05:49.819390 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-20 03:05:49.819401 | orchestrator |
2026-02-20 03:05:49.819412 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-20 03:05:49.819423 | orchestrator |
2026-02-20 03:05:49.819434 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-20 03:05:49.819445 | orchestrator | Friday 20 February 2026 03:05:20 +0000 (0:00:00.406) 0:00:00.943 *******
2026-02-20 03:05:49.819521 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:05:49.819532 | orchestrator |
2026-02-20 03:05:49.819544 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-02-20 03:05:49.819577 | orchestrator | Friday 20 February 2026 03:05:20 +0000 (0:00:00.510) 0:00:01.454 *******
2026-02-20 03:05:49.819589 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-20 03:05:49.819600 | orchestrator |
2026-02-20 03:05:49.819611 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-02-20 03:05:49.819621 | orchestrator | Friday 20 February 2026 03:05:24 +0000 (0:00:03.870) 0:00:05.324 *******
2026-02-20 03:05:49.819632 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-20 03:05:49.819643 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-20 03:05:49.819656 | orchestrator |
2026-02-20 03:05:49.819669 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-20 03:05:49.819681 | orchestrator | Friday 20 February 2026 03:05:31 +0000 (0:00:06.586) 0:00:11.911 *******
2026-02-20 03:05:49.819693 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-20 03:05:49.819705 | orchestrator |
2026-02-20 03:05:49.819718 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-20 03:05:49.819730 | orchestrator | Friday 20 February 2026 03:05:34 +0000 (0:00:03.938) 0:00:15.850 *******
2026-02-20 03:05:49.819742 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-20 03:05:49.819754 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-20 03:05:49.819790 | orchestrator |
2026-02-20 03:05:49.819803 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-02-20 03:05:49.819816 | orchestrator | Friday 20 February 2026 03:05:38 +0000 (0:00:03.875) 0:00:19.726 *******
2026-02-20 03:05:49.819828 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-20 03:05:49.819841 | orchestrator |
2026-02-20 03:05:49.819853 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-02-20 03:05:49.819867 | orchestrator | Friday 20 February 2026 03:05:42 +0000 (0:00:03.173) 0:00:22.899 *******
2026-02-20 03:05:49.819879 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-02-20 03:05:49.819891 | orchestrator |
2026-02-20 03:05:49.819904 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-20 03:05:49.819916 | orchestrator | Friday 20 February 2026 03:05:45 +0000 (0:00:03.679) 0:00:26.579 *******
2026-02-20 03:05:49.819929 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:49.819942 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:49.819968 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:49.819981 | orchestrator |
2026-02-20 03:05:49.819993 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-02-20 03:05:49.820006 | orchestrator | Friday 20 February 2026 03:05:46 +0000 (0:00:00.290) 0:00:26.869 *******
2026-02-20 03:05:49.820023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:49.820065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:49.820090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:49.820102 | orchestrator |
2026-02-20 03:05:49.820113 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-02-20 03:05:49.820124 | orchestrator | Friday 20 February 2026 03:05:47 +0000 (0:00:00.290) 0:00:28.002 *******
2026-02-20 03:05:49.820135 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:49.820146 | orchestrator |
2026-02-20 03:05:49.820157 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-02-20 03:05:49.820168 | orchestrator | Friday 20 February 2026 03:05:47 +0000 (0:00:00.290) 0:00:28.292 *******
2026-02-20 03:05:49.820179 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:49.820189 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:49.820200 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:49.820211 | orchestrator |
2026-02-20 03:05:49.820222 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-20 03:05:49.820232 | orchestrator | Friday 20 February 2026 03:05:47 +0000 (0:00:00.290) 0:00:28.583 *******
2026-02-20 03:05:49.820243 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:05:49.820254 | orchestrator |
2026-02-20 03:05:49.820265 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-02-20 03:05:49.820276 | orchestrator | Friday 20 February 2026 03:05:48 +0000 (0:00:00.517) 0:00:29.100 *******
2026-02-20 03:05:49.820287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:49.820315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.613288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.613585 | orchestrator |
2026-02-20 03:05:52.613674 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-02-20 03:05:52.613698 | orchestrator | Friday 20 February 2026 03:05:49 +0000 (0:00:01.560) 0:00:30.661 *******
2026-02-20 03:05:52.613719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.613741 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:52.613763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.613782 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:52.613924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.613941 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:52.613954 | orchestrator |
2026-02-20 03:05:52.613966 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-02-20 03:05:52.613998 | orchestrator | Friday 20 February 2026 03:05:50 +0000 (0:00:00.474) 0:00:31.136 *******
2026-02-20 03:05:52.614012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.614126 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:52.614147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.614166 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:52.614184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.614256 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:52.614270 | orchestrator |
2026-02-20 03:05:52.614281 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-02-20 03:05:52.614292 | orchestrator | Friday 20 February 2026 03:05:50 +0000 (0:00:00.665) 0:00:31.801 *******
2026-02-20 03:05:52.614321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:52.614347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463172 | orchestrator |
2026-02-20 03:05:59.463201 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-02-20 03:05:59.463223 | orchestrator | Friday 20 February 2026 03:05:52 +0000 (0:00:01.660) 0:00:33.461 *******
2026-02-20 03:05:59.463244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463414 | orchestrator |
2026-02-20 03:05:59.463425 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-02-20 03:05:59.463437 | orchestrator | Friday 20 February 2026 03:05:54 +0000 (0:00:02.214) 0:00:35.675 *******
2026-02-20 03:05:59.463508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-20 03:05:59.463523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-20 03:05:59.463537 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-20 03:05:59.463550 | orchestrator |
2026-02-20 03:05:59.463562 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-20 03:05:59.463575 | orchestrator | Friday 20 February 2026 03:05:56 +0000 (0:00:01.479) 0:00:37.155 *******
2026-02-20 03:05:59.463588 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:05:59.463601 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:05:59.463613 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:05:59.463626 | orchestrator |
2026-02-20 03:05:59.463640 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-20 03:05:59.463653 | orchestrator | Friday 20 February 2026 03:05:57 +0000 (0:00:00.749) 0:00:38.463 *******
2026-02-20 03:05:59.463664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463686 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:05:59.463697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463708 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:05:59.463725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:05:59.463737 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:05:59.463748 | orchestrator |
2026-02-20 03:05:59.463759 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-02-20 03:05:59.463770 | orchestrator | Friday 20 February 2026 03:05:58 +0000 (0:00:00.749) 0:00:39.212 *******
2026-02-20 03:05:59.463790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:06:25.297749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:06:25.297930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-20 03:06:25.297953 | orchestrator |
2026-02-20 03:06:25.297967 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-02-20 03:06:25.297981 | orchestrator | Friday 20 February 2026 03:05:59 +0000 (0:00:01.102) 0:00:40.315 *******
2026-02-20 03:06:25.297994 | orchestrator | changed: [testbed-node-0]
2026-02-20
03:06:25.298009 | orchestrator | 2026-02-20 03:06:25.298076 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-20 03:06:25.298085 | orchestrator | Friday 20 February 2026 03:06:01 +0000 (0:00:01.987) 0:00:42.303 ******* 2026-02-20 03:06:25.298092 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:06:25.298100 | orchestrator | 2026-02-20 03:06:25.298107 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-20 03:06:25.298114 | orchestrator | Friday 20 February 2026 03:06:03 +0000 (0:00:02.181) 0:00:44.484 ******* 2026-02-20 03:06:25.298121 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:06:25.298128 | orchestrator | 2026-02-20 03:06:25.298136 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-20 03:06:25.298143 | orchestrator | Friday 20 February 2026 03:06:17 +0000 (0:00:13.718) 0:00:58.202 ******* 2026-02-20 03:06:25.298150 | orchestrator | 2026-02-20 03:06:25.298170 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-20 03:06:25.298177 | orchestrator | Friday 20 February 2026 03:06:17 +0000 (0:00:00.067) 0:00:58.269 ******* 2026-02-20 03:06:25.298185 | orchestrator | 2026-02-20 03:06:25.298192 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-20 03:06:25.298199 | orchestrator | Friday 20 February 2026 03:06:17 +0000 (0:00:00.066) 0:00:58.336 ******* 2026-02-20 03:06:25.298206 | orchestrator | 2026-02-20 03:06:25.298214 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-20 03:06:25.298221 | orchestrator | Friday 20 February 2026 03:06:17 +0000 (0:00:00.067) 0:00:58.403 ******* 2026-02-20 03:06:25.298228 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:06:25.298235 | orchestrator | changed: [testbed-node-2] 2026-02-20 
03:06:25.298242 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:06:25.298249 | orchestrator | 2026-02-20 03:06:25.298256 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:06:25.298264 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:06:25.298285 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-20 03:06:25.298298 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-20 03:06:25.298310 | orchestrator | 2026-02-20 03:06:25.298323 | orchestrator | 2026-02-20 03:06:25.298335 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:06:25.298347 | orchestrator | Friday 20 February 2026 03:06:24 +0000 (0:00:07.435) 0:01:05.839 ******* 2026-02-20 03:06:25.298360 | orchestrator | =============================================================================== 2026-02-20 03:06:25.298373 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.72s 2026-02-20 03:06:25.298408 | orchestrator | placement : Restart placement-api container ----------------------------- 7.44s 2026-02-20 03:06:25.298417 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.59s 2026-02-20 03:06:25.298425 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.94s 2026-02-20 03:06:25.298432 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.88s 2026-02-20 03:06:25.298439 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.87s 2026-02-20 03:06:25.298446 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.68s 2026-02-20 03:06:25.298453 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.17s 2026-02-20 03:06:25.298460 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.21s 2026-02-20 03:06:25.298468 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.18s 2026-02-20 03:06:25.298475 | orchestrator | placement : Creating placement databases -------------------------------- 1.99s 2026-02-20 03:06:25.298532 | orchestrator | placement : Copying over config.json files for services ----------------- 1.66s 2026-02-20 03:06:25.298540 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.56s 2026-02-20 03:06:25.298547 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.48s 2026-02-20 03:06:25.298554 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2026-02-20 03:06:25.298561 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.13s 2026-02-20 03:06:25.298568 | orchestrator | placement : Check placement containers ---------------------------------- 1.10s 2026-02-20 03:06:25.298575 | orchestrator | placement : Copying over existing policy file --------------------------- 0.75s 2026-02-20 03:06:25.298582 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s 2026-02-20 03:06:25.298589 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s 2026-02-20 03:06:27.543040 | orchestrator | 2026-02-20 03:06:27 | INFO  | Task bcfb7b38-10d9-465b-b444-ea9de58afdc6 (neutron) was prepared for execution. 2026-02-20 03:06:27.543135 | orchestrator | 2026-02-20 03:06:27 | INFO  | It takes a moment until task bcfb7b38-10d9-465b-b444-ea9de58afdc6 (neutron) has been started and output is visible here. 
2026-02-20 03:07:12.934455 | orchestrator | 2026-02-20 03:07:12.934615 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:07:12.934632 | orchestrator | 2026-02-20 03:07:12.934644 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:07:12.934655 | orchestrator | Friday 20 February 2026 03:06:31 +0000 (0:00:00.238) 0:00:00.238 ******* 2026-02-20 03:07:12.934666 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:07:12.934677 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:07:12.934687 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:07:12.934696 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:07:12.934706 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:07:12.934716 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:07:12.934726 | orchestrator | 2026-02-20 03:07:12.934759 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:07:12.934769 | orchestrator | Friday 20 February 2026 03:06:32 +0000 (0:00:00.489) 0:00:00.728 ******* 2026-02-20 03:07:12.934779 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-20 03:07:12.934789 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-20 03:07:12.934812 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-20 03:07:12.934823 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-20 03:07:12.934832 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-20 03:07:12.934842 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-20 03:07:12.934852 | orchestrator | 2026-02-20 03:07:12.934862 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-20 03:07:12.934871 | orchestrator | 2026-02-20 03:07:12.934881 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-20 03:07:12.934890 | orchestrator | Friday 20 February 2026 03:06:32 +0000 (0:00:00.461) 0:00:01.189 ******* 2026-02-20 03:07:12.934901 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:07:12.934912 | orchestrator | 2026-02-20 03:07:12.934922 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-20 03:07:12.934931 | orchestrator | Friday 20 February 2026 03:06:33 +0000 (0:00:00.932) 0:00:02.122 ******* 2026-02-20 03:07:12.934942 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:07:12.934952 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:07:12.934962 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:07:12.934971 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:07:12.934981 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:07:12.934993 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:07:12.935004 | orchestrator | 2026-02-20 03:07:12.935015 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-20 03:07:12.935026 | orchestrator | Friday 20 February 2026 03:06:34 +0000 (0:00:01.059) 0:00:03.181 ******* 2026-02-20 03:07:12.935037 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:07:12.935048 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:07:12.935059 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:07:12.935084 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:07:12.935106 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:07:12.935118 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:07:12.935129 | orchestrator | 2026-02-20 03:07:12.935140 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-20 03:07:12.935152 | orchestrator | Friday 20 February 2026 03:06:35 +0000 (0:00:00.997) 0:00:04.179 ******* 
2026-02-20 03:07:12.935164 | orchestrator | ok: [testbed-node-0] => { 2026-02-20 03:07:12.935176 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935188 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935199 | orchestrator | } 2026-02-20 03:07:12.935210 | orchestrator | ok: [testbed-node-1] => { 2026-02-20 03:07:12.935222 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935233 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935244 | orchestrator | } 2026-02-20 03:07:12.935255 | orchestrator | ok: [testbed-node-2] => { 2026-02-20 03:07:12.935267 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935278 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935289 | orchestrator | } 2026-02-20 03:07:12.935300 | orchestrator | ok: [testbed-node-3] => { 2026-02-20 03:07:12.935311 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935324 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935336 | orchestrator | } 2026-02-20 03:07:12.935347 | orchestrator | ok: [testbed-node-4] => { 2026-02-20 03:07:12.935356 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935366 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935376 | orchestrator | } 2026-02-20 03:07:12.935385 | orchestrator | ok: [testbed-node-5] => { 2026-02-20 03:07:12.935403 | orchestrator |  "changed": false, 2026-02-20 03:07:12.935413 | orchestrator |  "msg": "All assertions passed" 2026-02-20 03:07:12.935422 | orchestrator | } 2026-02-20 03:07:12.935432 | orchestrator | 2026-02-20 03:07:12.935442 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-20 03:07:12.935451 | orchestrator | Friday 20 February 2026 03:06:36 +0000 (0:00:00.651) 0:00:04.831 ******* 2026-02-20 03:07:12.935461 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:12.935471 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:12.935480 | orchestrator 
| skipping: [testbed-node-2] 2026-02-20 03:07:12.935490 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:12.935500 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:12.935509 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:12.935549 | orchestrator | 2026-02-20 03:07:12.935560 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-20 03:07:12.935569 | orchestrator | Friday 20 February 2026 03:06:36 +0000 (0:00:00.515) 0:00:05.347 ******* 2026-02-20 03:07:12.935579 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-20 03:07:12.935589 | orchestrator | 2026-02-20 03:07:12.935599 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-20 03:07:12.935608 | orchestrator | Friday 20 February 2026 03:06:40 +0000 (0:00:03.921) 0:00:09.268 ******* 2026-02-20 03:07:12.935619 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-20 03:07:12.935630 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-20 03:07:12.935639 | orchestrator | 2026-02-20 03:07:12.935666 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-20 03:07:12.935676 | orchestrator | Friday 20 February 2026 03:06:46 +0000 (0:00:06.266) 0:00:15.534 ******* 2026-02-20 03:07:12.935686 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:07:12.935696 | orchestrator | 2026-02-20 03:07:12.935705 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-20 03:07:12.935715 | orchestrator | Friday 20 February 2026 03:06:49 +0000 (0:00:03.019) 0:00:18.554 ******* 2026-02-20 03:07:12.935725 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:07:12.935735 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-20 03:07:12.935744 | orchestrator | 2026-02-20 03:07:12.935838 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-20 03:07:12.935852 | orchestrator | Friday 20 February 2026 03:06:53 +0000 (0:00:04.041) 0:00:22.596 ******* 2026-02-20 03:07:12.935862 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:07:12.935872 | orchestrator | 2026-02-20 03:07:12.935881 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-20 03:07:12.935898 | orchestrator | Friday 20 February 2026 03:06:57 +0000 (0:00:03.120) 0:00:25.716 ******* 2026-02-20 03:07:12.935907 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-20 03:07:12.935917 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-20 03:07:12.935926 | orchestrator | 2026-02-20 03:07:12.935936 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-20 03:07:12.935945 | orchestrator | Friday 20 February 2026 03:07:04 +0000 (0:00:07.411) 0:00:33.128 ******* 2026-02-20 03:07:12.935955 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:12.935965 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:12.935974 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:12.935984 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:12.935993 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:12.936003 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:12.936013 | orchestrator | 2026-02-20 03:07:12.936022 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-20 03:07:12.936032 | orchestrator | Friday 20 February 2026 03:07:05 +0000 (0:00:00.747) 0:00:33.875 ******* 2026-02-20 03:07:12.936048 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
03:07:12.936058 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:12.936067 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:12.936077 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:12.936086 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:12.936096 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:12.936105 | orchestrator | 2026-02-20 03:07:12.936115 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-20 03:07:12.936125 | orchestrator | Friday 20 February 2026 03:07:07 +0000 (0:00:01.988) 0:00:35.864 ******* 2026-02-20 03:07:12.936134 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:07:12.936144 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:07:12.936154 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:07:12.936164 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:07:12.936173 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:07:12.936183 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:07:12.936192 | orchestrator | 2026-02-20 03:07:12.936202 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-20 03:07:12.936212 | orchestrator | Friday 20 February 2026 03:07:08 +0000 (0:00:01.110) 0:00:36.975 ******* 2026-02-20 03:07:12.936222 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:12.936231 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:12.936241 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:12.936250 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:12.936260 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:12.936270 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:12.936279 | orchestrator | 2026-02-20 03:07:12.936289 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-20 03:07:12.936299 | orchestrator | Friday 20 February 2026 03:07:10 +0000 (0:00:02.087) 
0:00:39.062 ******* 2026-02-20 03:07:12.936312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:12.936337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:17.492627 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:17.492766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:17.492785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:17.492798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:17.492810 | orchestrator | 2026-02-20 03:07:17.492823 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-20 03:07:17.492836 | orchestrator | Friday 20 February 2026 03:07:12 +0000 (0:00:02.497) 0:00:41.559 ******* 2026-02-20 03:07:17.492848 | orchestrator | [WARNING]: Skipped 2026-02-20 03:07:17.492860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-20 03:07:17.492872 | orchestrator | due to this access issue: 2026-02-20 03:07:17.492883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-20 03:07:17.492894 | orchestrator | a directory 2026-02-20 03:07:17.492906 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:07:17.492917 | orchestrator | 2026-02-20 03:07:17.492928 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-20 03:07:17.492939 | orchestrator | Friday 20 February 2026 03:07:13 +0000 (0:00:00.671) 0:00:42.231 ******* 2026-02-20 03:07:17.492959 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:07:17.492971 | orchestrator | 2026-02-20 03:07:17.492982 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-20 03:07:17.493012 | orchestrator | Friday 20 February 2026 03:07:14 +0000 (0:00:01.043) 0:00:43.275 ******* 2026-02-20 03:07:17.493033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:17.493049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:17.493063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:17.493078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:17.493100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:21.446976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:21.447095 | orchestrator | 2026-02-20 03:07:21.447113 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-20 03:07:21.447126 | orchestrator | Friday 20 February 2026 03:07:17 +0000 (0:00:02.840) 0:00:46.115 ******* 2026-02-20 03:07:21.447141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:21.447154 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:21.447166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:21.447178 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:21.447189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:21.447225 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:21.447285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:21.447310 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:21.447331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:21.447343 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:21.447354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:21.447365 
| orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:21.447376 | orchestrator | 2026-02-20 03:07:21.447388 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-20 03:07:21.447399 | orchestrator | Friday 20 February 2026 03:07:19 +0000 (0:00:01.711) 0:00:47.827 ******* 2026-02-20 03:07:21.447411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:21.447431 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:21.447451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:25.729179 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:25.729300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:25.729319 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:25.729332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:25.729344 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:25.729356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:25.729390 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:25.729402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:25.729413 | orchestrator | skipping: [testbed-node-4] 2026-02-20 
03:07:25.729424 | orchestrator | 2026-02-20 03:07:25.729436 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-20 03:07:25.729448 | orchestrator | Friday 20 February 2026 03:07:21 +0000 (0:00:02.243) 0:00:50.070 ******* 2026-02-20 03:07:25.729459 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:25.729470 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:25.729480 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:25.729491 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:25.729501 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:25.729512 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:25.729523 | orchestrator | 2026-02-20 03:07:25.729578 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-20 03:07:25.729589 | orchestrator | Friday 20 February 2026 03:07:23 +0000 (0:00:01.910) 0:00:51.980 ******* 2026-02-20 03:07:25.729600 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:25.729611 | orchestrator | 2026-02-20 03:07:25.729621 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-20 03:07:25.729648 | orchestrator | Friday 20 February 2026 03:07:23 +0000 (0:00:00.120) 0:00:52.100 ******* 2026-02-20 03:07:25.729659 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:25.729670 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:25.729681 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:25.729697 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:07:25.729708 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:25.729719 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:25.729729 | orchestrator | 2026-02-20 03:07:25.729740 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-20 03:07:25.729751 | orchestrator | Friday 20 
February 2026 03:07:23 +0000 (0:00:00.506) 0:00:52.607 ******* 2026-02-20 03:07:25.729763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:25.729775 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:25.729786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-02-20 03:07:25.729806 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:07:25.729817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:25.729828 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:25.729839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:25.729851 | orchestrator | skipping: [testbed-node-3] 2026-02-20 
03:07:25.729874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:32.558790 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:07:32.558907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:07:32.558949 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:07:32.558962 | orchestrator | 2026-02-20 03:07:32.558974 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-20 03:07:32.558986 | orchestrator | Friday 20 February 2026 03:07:25 +0000 (0:00:01.741) 0:00:54.348 
******* 2026-02-20 03:07:32.558999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:32.559013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:32.559039 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:32.559071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:32.559083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:32.559140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:32.559153 | orchestrator | 2026-02-20 03:07:32.559165 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-20 03:07:32.559176 | orchestrator | Friday 20 February 2026 03:07:28 +0000 (0:00:02.600) 0:00:56.949 ******* 2026-02-20 03:07:32.559188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:32.559205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:32.559227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:07:36.585942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:36.586110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 
03:07:36.586130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:07:36.586144 | orchestrator | 2026-02-20 03:07:36.586158 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-20 03:07:36.586170 | orchestrator | Friday 20 February 2026 03:07:32 +0000 (0:00:04.236) 0:01:01.185 ******* 2026-02-20 03:07:36.586200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-20 03:07:36.586214 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:07:36.586270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:07:36.586286 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:07:36.586306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:36.586324 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:36.586344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:36.586372 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:36.586391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:36.586410 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:36.586438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name':
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:36.586470 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:36.586512 | orchestrator |
2026-02-20 03:07:36.586571 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-20 03:07:36.586594 | orchestrator | Friday 20 February 2026 03:07:34 +0000 (0:00:01.718) 0:01:02.904 *******
2026-02-20 03:07:36.586614 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:36.586634 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:36.586653 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:36.586672 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:07:36.586692 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:07:36.586711 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:07:36.586731 | orchestrator |
2026-02-20 03:07:36.586750 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-20 03:07:36.586784 | orchestrator | Friday 20 February 2026 03:07:36 +0000 (0:00:02.305) 0:01:05.209 *******
2026-02-20 03:07:52.845462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True,
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:52.845583 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:52.845602 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared',
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:52.845614 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:52.845671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external':
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:52.845678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:52.845684 | orchestrator |
2026-02-20 03:07:52.845690 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-20 03:07:52.845696 | orchestrator | Friday 20 February 2026 03:07:39 +0000 (0:00:02.804) 0:01:08.013 *******
2026-02-20 03:07:52.845702 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845707 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845713 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845718 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845723 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845729 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845734 | orchestrator |
2026-02-20 03:07:52.845740 | orchestrator | TASK [neutron : Copying over
openvswitch_agent.ini] ****************************
2026-02-20 03:07:52.845745 | orchestrator | Friday 20 February 2026 03:07:41 +0000 (0:00:02.218) 0:01:10.232 *******
2026-02-20 03:07:52.845751 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845756 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845761 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845767 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845772 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845777 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845783 | orchestrator |
2026-02-20 03:07:52.845789 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-20 03:07:52.845800 | orchestrator | Friday 20 February 2026 03:07:43 +0000 (0:00:01.981) 0:01:12.213 *******
2026-02-20 03:07:52.845805 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845811 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845816 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845822 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845827 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845832 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845838 | orchestrator |
2026-02-20 03:07:52.845843 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-20 03:07:52.845849 | orchestrator | Friday 20 February 2026 03:07:45 +0000 (0:00:02.087) 0:01:14.300 *******
2026-02-20 03:07:52.845854 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845859 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845865 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845870 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845875 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845881 | orchestrator |
skipping: [testbed-node-4]
2026-02-20 03:07:52.845886 | orchestrator |
2026-02-20 03:07:52.845892 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-20 03:07:52.845897 | orchestrator | Friday 20 February 2026 03:07:47 +0000 (0:00:01.942) 0:01:16.243 *******
2026-02-20 03:07:52.845906 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845911 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845917 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845922 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845928 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845933 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.845949 | orchestrator |
2026-02-20 03:07:52.845954 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-20 03:07:52.845966 | orchestrator | Friday 20 February 2026 03:07:49 +0000 (0:00:01.761) 0:01:18.005 *******
2026-02-20 03:07:52.845972 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.845977 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.845983 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.845988 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.845994 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:52.845999 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:52.846004 | orchestrator |
2026-02-20 03:07:52.846010 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-20 03:07:52.846053 | orchestrator | Friday 20 February 2026 03:07:51 +0000 (0:00:01.684) 0:01:19.690 *******
2026-02-20 03:07:52.846060 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:52.846067 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:52.846074 | orchestrator | skipping:
[testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:52.846080 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:52.846087 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:52.846093 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:52.846100 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:52.846107 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:52.846117 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:56.211072 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:07:56.211172 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-20 03:07:56.211187 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:56.211199 | orchestrator |
2026-02-20 03:07:56.211212 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-20 03:07:56.211248 | orchestrator | Friday 20 February 2026 03:07:52 +0000 (0:00:01.772) 0:01:21.462 *******
2026-02-20 03:07:56.211263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696',
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:56.211279 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:56.211290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:56.211302 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:56.211327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http',
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:56.211339 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:07:56.211351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:56.211363 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:07:56.211392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:56.211413 | orchestrator |
skipping: [testbed-node-5]
2026-02-20 03:07:56.211424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:07:56.211436 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:07:56.211447 | orchestrator |
2026-02-20 03:07:56.211458 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-20 03:07:56.211469 | orchestrator | Friday 20 February 2026 03:07:54 +0000 (0:00:01.686) 0:01:23.149 *******
2026-02-20 03:07:56.211481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled':
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:56.211492 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:07:56.211508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:07:56.211520 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:07:56.211539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'},
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-20 03:08:19.339319 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.339489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:08:19.339523 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.339544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:08:19.339565 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.339665 |
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-20 03:08:19.339689 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.339709 | orchestrator |
2026-02-20 03:08:19.339729 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-20 03:08:19.339750 | orchestrator | Friday 20 February 2026 03:07:56 +0000 (0:00:01.687) 0:01:24.837 *******
2026-02-20 03:08:19.339769 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.339787 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.339805 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.339822 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.339838 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.339856 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.339874 | orchestrator |
2026-02-20 03:08:19.339892 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-20 03:08:19.339942 | orchestrator | Friday 20 February 2026 03:07:57 +0000 (0:00:01.664) 0:01:26.501 *******
2026-02-20 03:08:19.339962 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.339980 | orchestrator | skipping: [testbed-node-1]
2026-02-20
03:08:19.339997 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340014 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:08:19.340033 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:08:19.340052 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:08:19.340071 | orchestrator |
2026-02-20 03:08:19.340091 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-20 03:08:19.340111 | orchestrator | Friday 20 February 2026 03:08:00 +0000 (0:00:03.073) 0:01:29.574 *******
2026-02-20 03:08:19.340130 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.340149 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340161 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.340171 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.340184 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.340203 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.340220 | orchestrator |
2026-02-20 03:08:19.340238 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-20 03:08:19.340257 | orchestrator | Friday 20 February 2026 03:08:02 +0000 (0:00:01.984) 0:01:31.559 *******
2026-02-20 03:08:19.340274 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340292 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.340310 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.340327 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.340343 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.340360 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.340376 | orchestrator |
2026-02-20 03:08:19.340392 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-20 03:08:19.340437 | orchestrator | Friday 20 February 2026 03:08:04 +0000 (0:00:02.000) 0:01:33.560 *******
2026-02-20
03:08:19.340456 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340473 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.340490 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.340507 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.340524 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.340542 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.340559 | orchestrator |
2026-02-20 03:08:19.340629 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-20 03:08:19.340648 | orchestrator | Friday 20 February 2026 03:08:07 +0000 (0:00:02.097) 0:01:35.657 *******
2026-02-20 03:08:19.340666 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340684 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.340702 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.340721 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.340740 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.340758 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.340778 | orchestrator |
2026-02-20 03:08:19.340796 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-20 03:08:19.340814 | orchestrator | Friday 20 February 2026 03:08:09 +0000 (0:00:02.138) 0:01:37.796 *******
2026-02-20 03:08:19.340831 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.340849 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.340867 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.340885 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.340904 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.340921 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.340939 | orchestrator |
2026-02-20 03:08:19.340954 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script]
**************************
2026-02-20 03:08:19.340970 | orchestrator | Friday 20 February 2026 03:08:11 +0000 (0:00:02.163) 0:01:39.940 *******
2026-02-20 03:08:19.340985 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.341026 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.341042 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.341058 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.341072 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.341089 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.341104 | orchestrator |
2026-02-20 03:08:19.341120 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-20 03:08:19.341137 | orchestrator | Friday 20 February 2026 03:08:13 +0000 (0:00:02.169) 0:01:42.103 *******
2026-02-20 03:08:19.341153 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.341169 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.341184 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:08:19.341199 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:08:19.341214 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:08:19.341232 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:08:19.341248 | orchestrator |
2026-02-20 03:08:19.341264 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-20 03:08:19.341282 | orchestrator | Friday 20 February 2026 03:08:15 +0000 (0:00:02.169) 0:01:44.273 *******
2026-02-20 03:08:19.341299 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-20 03:08:19.341317 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:08:19.341333 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-20 03:08:19.341350 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:08:19.341379 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-20 03:08:19.341396 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:08:19.341412 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-20 03:08:19.341429 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:08:19.341440 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-20 03:08:19.341450 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:08:19.341459 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-20 03:08:19.341469 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:08:19.341478 | orchestrator | 2026-02-20 03:08:19.341487 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-20 03:08:19.341497 | orchestrator | Friday 20 February 2026 03:08:17 +0000 (0:00:01.703) 0:01:45.976 ******* 2026-02-20 03:08:19.341509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:08:19.341522 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:08:19.341547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:08:21.563034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-20 03:08:21.563132 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:08:21.563147 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:08:21.563192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:08:21.563204 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:08:21.563215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:08:21.563235 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:08:21.563245 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 03:08:21.563276 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:08:21.563287 | orchestrator | 2026-02-20 03:08:21.563298 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-20 03:08:21.563309 | orchestrator | Friday 20 February 2026 03:08:19 +0000 (0:00:01.985) 0:01:47.962 ******* 2026-02-20 03:08:21.563336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:08:21.563349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:08:21.563365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-20 03:08:21.563376 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:08:21.563387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:08:21.563412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-20 03:10:40.607540 | orchestrator | 2026-02-20 03:10:40.607662 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-20 03:10:40.607679 | orchestrator | Friday 20 February 2026 03:08:21 +0000 (0:00:02.220) 0:01:50.183 ******* 2026-02-20 03:10:40.607691 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:10:40.607773 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:10:40.607794 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:10:40.607814 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:10:40.607833 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:10:40.607853 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:10:40.607873 | orchestrator | 2026-02-20 03:10:40.607891 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-20 03:10:40.607909 | orchestrator | Friday 20 February 2026 03:08:22 +0000 (0:00:00.621) 0:01:50.805 ******* 2026-02-20 03:10:40.607928 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:10:40.607947 | orchestrator | 2026-02-20 03:10:40.607967 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-20 03:10:40.607987 | orchestrator | Friday 20 February 2026 03:08:24 +0000 (0:00:02.016) 0:01:52.821 ******* 2026-02-20 03:10:40.608007 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:10:40.608027 | orchestrator | 2026-02-20 03:10:40.608039 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-20 03:10:40.608051 | orchestrator | Friday 20 
February 2026 03:08:26 +0000 (0:00:02.187) 0:01:55.009 ******* 2026-02-20 03:10:40.608062 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:10:40.608073 | orchestrator | 2026-02-20 03:10:40.608087 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608117 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:44.201) 0:02:39.211 ******* 2026-02-20 03:10:40.608130 | orchestrator | 2026-02-20 03:10:40.608142 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608155 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.069) 0:02:39.280 ******* 2026-02-20 03:10:40.608167 | orchestrator | 2026-02-20 03:10:40.608180 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608193 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.068) 0:02:39.348 ******* 2026-02-20 03:10:40.608205 | orchestrator | 2026-02-20 03:10:40.608217 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608230 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.070) 0:02:39.419 ******* 2026-02-20 03:10:40.608241 | orchestrator | 2026-02-20 03:10:40.608277 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608290 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.070) 0:02:39.489 ******* 2026-02-20 03:10:40.608302 | orchestrator | 2026-02-20 03:10:40.608314 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-20 03:10:40.608327 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.065) 0:02:39.555 ******* 2026-02-20 03:10:40.608339 | orchestrator | 2026-02-20 03:10:40.608351 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-02-20 03:10:40.608364 | orchestrator | Friday 20 February 2026 03:09:10 +0000 (0:00:00.069) 0:02:39.624 ******* 2026-02-20 03:10:40.608376 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:10:40.608389 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:10:40.608401 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:10:40.608413 | orchestrator | 2026-02-20 03:10:40.608426 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-20 03:10:40.608438 | orchestrator | Friday 20 February 2026 03:09:38 +0000 (0:00:27.582) 0:03:07.207 ******* 2026-02-20 03:10:40.608450 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:10:40.608463 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:10:40.608475 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:10:40.608486 | orchestrator | 2026-02-20 03:10:40.608497 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:10:40.608509 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 03:10:40.608522 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-20 03:10:40.608533 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-20 03:10:40.608544 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 03:10:40.608555 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 03:10:40.608566 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-20 03:10:40.608576 | orchestrator | 2026-02-20 03:10:40.608587 | orchestrator | 2026-02-20 03:10:40.608598 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-20 03:10:40.608609 | orchestrator | Friday 20 February 2026 03:10:40 +0000 (0:01:01.647) 0:04:08.855 ******* 2026-02-20 03:10:40.608620 | orchestrator | =============================================================================== 2026-02-20 03:10:40.608631 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.65s 2026-02-20 03:10:40.608641 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.20s 2026-02-20 03:10:40.608652 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.58s 2026-02-20 03:10:40.608682 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.41s 2026-02-20 03:10:40.608753 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.27s 2026-02-20 03:10:40.608767 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.24s 2026-02-20 03:10:40.608777 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.04s 2026-02-20 03:10:40.608788 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.92s 2026-02-20 03:10:40.608798 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.12s 2026-02-20 03:10:40.608809 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.07s 2026-02-20 03:10:40.608829 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.02s 2026-02-20 03:10:40.608840 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.84s 2026-02-20 03:10:40.608851 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 2.80s 2026-02-20 03:10:40.608861 | orchestrator | neutron : Copying over 
config.json files for services ------------------- 2.60s 2026-02-20 03:10:40.608872 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.50s 2026-02-20 03:10:40.608882 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.31s 2026-02-20 03:10:40.608893 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.24s 2026-02-20 03:10:40.608910 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.22s 2026-02-20 03:10:40.608921 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.22s 2026-02-20 03:10:40.608932 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.19s 2026-02-20 03:10:42.844643 | orchestrator | 2026-02-20 03:10:42 | INFO  | Task ddb2c374-07c4-491c-a95f-2bd363175fd1 (nova) was prepared for execution. 2026-02-20 03:10:42.844828 | orchestrator | 2026-02-20 03:10:42 | INFO  | It takes a moment until task ddb2c374-07c4-491c-a95f-2bd363175fd1 (nova) has been started and output is visible here. 
2026-02-20 03:12:39.333350 | orchestrator |
2026-02-20 03:12:39.333462 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:12:39.333477 | orchestrator |
2026-02-20 03:12:39.333487 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-20 03:12:39.333497 | orchestrator | Friday 20 February 2026 03:10:46 +0000 (0:00:00.271) 0:00:00.271 *******
2026-02-20 03:12:39.333507 | orchestrator | changed: [testbed-manager]
2026-02-20 03:12:39.333518 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.333528 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:12:39.333537 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:12:39.333547 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:12:39.333556 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:12:39.333566 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:12:39.333575 | orchestrator |
2026-02-20 03:12:39.333585 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:12:39.333595 | orchestrator | Friday 20 February 2026 03:10:47 +0000 (0:00:00.772) 0:00:01.043 *******
2026-02-20 03:12:39.333604 | orchestrator | changed: [testbed-manager]
2026-02-20 03:12:39.333614 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.333624 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:12:39.333633 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:12:39.333643 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:12:39.333652 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:12:39.333662 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:12:39.333671 | orchestrator |
2026-02-20 03:12:39.333681 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:12:39.333691 | orchestrator | Friday 20 February 2026 03:10:48 +0000 (0:00:00.801) 0:00:01.845 *******
2026-02-20 03:12:39.333701 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-20 03:12:39.333711 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-20 03:12:39.333720 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-20 03:12:39.333730 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-20 03:12:39.333739 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-20 03:12:39.333819 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-20 03:12:39.333832 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-20 03:12:39.333870 | orchestrator |
2026-02-20 03:12:39.333880 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-20 03:12:39.333911 | orchestrator |
2026-02-20 03:12:39.333921 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-20 03:12:39.333931 | orchestrator | Friday 20 February 2026 03:10:49 +0000 (0:00:00.670) 0:00:02.516 *******
2026-02-20 03:12:39.333941 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:12:39.333950 | orchestrator |
2026-02-20 03:12:39.333959 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-20 03:12:39.333969 | orchestrator | Friday 20 February 2026 03:10:49 +0000 (0:00:00.677) 0:00:03.193 *******
2026-02-20 03:12:39.333979 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-20 03:12:39.333989 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-20 03:12:39.333998 | orchestrator |
2026-02-20 03:12:39.334008 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-20 03:12:39.334072 | orchestrator | Friday 20 February 2026 03:10:53 +0000 (0:00:04.019) 0:00:07.212 *******
2026-02-20 03:12:39.334083 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-20 03:12:39.334093 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-20 03:12:39.334103 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334113 | orchestrator |
2026-02-20 03:12:39.334123 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-20 03:12:39.334132 | orchestrator | Friday 20 February 2026 03:10:58 +0000 (0:00:04.141) 0:00:11.354 *******
2026-02-20 03:12:39.334142 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334151 | orchestrator |
2026-02-20 03:12:39.334161 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-20 03:12:39.334171 | orchestrator | Friday 20 February 2026 03:10:58 +0000 (0:00:00.622) 0:00:11.976 *******
2026-02-20 03:12:39.334180 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334190 | orchestrator |
2026-02-20 03:12:39.334199 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-20 03:12:39.334209 | orchestrator | Friday 20 February 2026 03:10:59 +0000 (0:00:01.223) 0:00:13.200 *******
2026-02-20 03:12:39.334218 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334228 | orchestrator |
2026-02-20 03:12:39.334237 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-20 03:12:39.334247 | orchestrator | Friday 20 February 2026 03:11:02 +0000 (0:00:02.489) 0:00:15.689 *******
2026-02-20 03:12:39.334256 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.334265 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.334275 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.334284 | orchestrator |
2026-02-20 03:12:39.334294 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-20 03:12:39.334303 | orchestrator | Friday 20 February 2026 03:11:02 +0000 (0:00:00.281) 0:00:15.971 *******
2026-02-20 03:12:39.334313 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:12:39.334323 | orchestrator |
2026-02-20 03:12:39.334332 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-20 03:12:39.334356 | orchestrator | Friday 20 February 2026 03:11:36 +0000 (0:00:33.650) 0:00:49.621 *******
2026-02-20 03:12:39.334366 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334389 | orchestrator |
2026-02-20 03:12:39.334398 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-20 03:12:39.334418 | orchestrator | Friday 20 February 2026 03:11:50 +0000 (0:00:14.126) 0:01:03.748 *******
2026-02-20 03:12:39.334428 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:12:39.334438 | orchestrator |
2026-02-20 03:12:39.334447 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-20 03:12:39.334457 | orchestrator | Friday 20 February 2026 03:12:01 +0000 (0:00:11.319) 0:01:15.067 *******
2026-02-20 03:12:39.334484 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:12:39.334495 | orchestrator |
2026-02-20 03:12:39.334505 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-20 03:12:39.334514 | orchestrator | Friday 20 February 2026 03:12:02 +0000 (0:00:00.662) 0:01:15.730 *******
2026-02-20 03:12:39.334537 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.334553 | orchestrator |
2026-02-20 03:12:39.334567 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-20 03:12:39.334582 | orchestrator | Friday 20 February 2026 03:12:02 +0000 (0:00:00.446) 0:01:16.176 *******
2026-02-20 03:12:39.334597 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:12:39.334611 | orchestrator |
2026-02-20 03:12:39.334626 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-20 03:12:39.334641 | orchestrator | Friday 20 February 2026 03:12:03 +0000 (0:00:00.642) 0:01:16.819 *******
2026-02-20 03:12:39.334656 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:12:39.334671 | orchestrator |
2026-02-20 03:12:39.334686 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-20 03:12:39.334702 | orchestrator | Friday 20 February 2026 03:12:20 +0000 (0:00:17.502) 0:01:34.321 *******
2026-02-20 03:12:39.334717 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.334732 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.334749 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.334766 | orchestrator |
2026-02-20 03:12:39.334782 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-20 03:12:39.334799 | orchestrator |
2026-02-20 03:12:39.334809 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-20 03:12:39.334818 | orchestrator | Friday 20 February 2026 03:12:21 +0000 (0:00:00.306) 0:01:34.628 *******
2026-02-20 03:12:39.334828 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:12:39.334887 | orchestrator |
2026-02-20 03:12:39.334898 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-20 03:12:39.334918 | orchestrator | Friday 20 February 2026 03:12:22 +0000 (0:00:00.695) 0:01:35.324 *******
2026-02-20 03:12:39.334928 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.334938 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.334947 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.334957 | orchestrator |
2026-02-20 03:12:39.334966 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-20 03:12:39.334976 | orchestrator | Friday 20 February 2026 03:12:23 +0000 (0:00:01.909) 0:01:37.233 *******
2026-02-20 03:12:39.334985 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.334995 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335004 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.335013 | orchestrator |
2026-02-20 03:12:39.335023 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-20 03:12:39.335033 | orchestrator | Friday 20 February 2026 03:12:26 +0000 (0:00:02.263) 0:01:39.496 *******
2026-02-20 03:12:39.335042 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.335052 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335061 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335070 | orchestrator |
2026-02-20 03:12:39.335080 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-20 03:12:39.335090 | orchestrator | Friday 20 February 2026 03:12:26 +0000 (0:00:00.465) 0:01:39.962 *******
2026-02-20 03:12:39.335099 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-20 03:12:39.335109 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335118 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-20 03:12:39.335128 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335137 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-20 03:12:39.335147 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-20 03:12:39.335157 | orchestrator |
2026-02-20 03:12:39.335167 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-20 03:12:39.335176 | orchestrator | Friday 20 February 2026 03:12:34 +0000 (0:00:07.565) 0:01:47.528 *******
2026-02-20 03:12:39.335194 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.335204 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335213 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335223 | orchestrator |
2026-02-20 03:12:39.335232 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-20 03:12:39.335242 | orchestrator | Friday 20 February 2026 03:12:34 +0000 (0:00:00.316) 0:01:47.844 *******
2026-02-20 03:12:39.335252 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-20 03:12:39.335261 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:12:39.335270 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-20 03:12:39.335280 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335289 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-20 03:12:39.335299 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335308 | orchestrator |
2026-02-20 03:12:39.335318 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-20 03:12:39.335328 | orchestrator | Friday 20 February 2026 03:12:35 +0000 (0:00:00.993) 0:01:48.838 *******
2026-02-20 03:12:39.335337 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335347 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335356 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.335366 | orchestrator |
2026-02-20 03:12:39.335375 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-20 03:12:39.335385 | orchestrator | Friday 20 February 2026 03:12:35 +0000 (0:00:00.470) 0:01:49.308 *******
2026-02-20 03:12:39.335395 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335404 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335414 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:12:39.335423 | orchestrator |
2026-02-20 03:12:39.335433 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-20 03:12:39.335443 | orchestrator | Friday 20 February 2026 03:12:37 +0000 (0:00:01.038) 0:01:50.346 *******
2026-02-20 03:12:39.335452 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:12:39.335462 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:12:39.335480 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:13:55.555291 | orchestrator |
2026-02-20 03:13:55.555409 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-20 03:13:55.555426 | orchestrator | Friday 20 February 2026 03:12:39 +0000 (0:00:02.292) 0:01:52.638 *******
2026-02-20 03:13:55.555438 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:13:55.555449 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:13:55.555460 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:13:55.555472 | orchestrator |
2026-02-20 03:13:55.555484 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-20 03:13:55.555495 | orchestrator | Friday 20 February 2026 03:13:01 +0000 (0:00:22.119) 0:02:14.757 *******
2026-02-20 03:13:55.555506 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:13:55.555516 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:13:55.555527 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:13:55.555543 | orchestrator |
2026-02-20 03:13:55.555563 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-20 03:13:55.555594 | orchestrator | Friday 20 February 2026 03:13:12 +0000 (0:00:11.398) 0:02:26.156 *******
2026-02-20 03:13:55.555614 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:13:55.555633 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:13:55.555654 | orchestrator | skipping:
[testbed-node-2] 2026-02-20 03:13:55.555674 | orchestrator | 2026-02-20 03:13:55.555693 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-20 03:13:55.555713 | orchestrator | Friday 20 February 2026 03:13:13 +0000 (0:00:00.989) 0:02:27.145 ******* 2026-02-20 03:13:55.555732 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:13:55.555751 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:13:55.555762 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:13:55.555773 | orchestrator | 2026-02-20 03:13:55.555809 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-20 03:13:55.555821 | orchestrator | Friday 20 February 2026 03:13:25 +0000 (0:00:12.110) 0:02:39.256 ******* 2026-02-20 03:13:55.555833 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:13:55.555846 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:13:55.555858 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:13:55.555870 | orchestrator | 2026-02-20 03:13:55.555882 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-20 03:13:55.555894 | orchestrator | Friday 20 February 2026 03:13:26 +0000 (0:00:01.007) 0:02:40.263 ******* 2026-02-20 03:13:55.555933 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:13:55.555946 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:13:55.555958 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:13:55.555970 | orchestrator | 2026-02-20 03:13:55.555982 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-20 03:13:55.555994 | orchestrator | 2026-02-20 03:13:55.556007 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-20 03:13:55.556019 | orchestrator | Friday 20 February 2026 03:13:27 +0000 (0:00:00.304) 0:02:40.568 ******* 2026-02-20 03:13:55.556032 | 
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:13:55.556046 | orchestrator | 2026-02-20 03:13:55.556059 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-20 03:13:55.556071 | orchestrator | Friday 20 February 2026 03:13:27 +0000 (0:00:00.674) 0:02:41.243 ******* 2026-02-20 03:13:55.556084 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-20 03:13:55.556096 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-20 03:13:55.556109 | orchestrator | 2026-02-20 03:13:55.556122 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-20 03:13:55.556134 | orchestrator | Friday 20 February 2026 03:13:31 +0000 (0:00:03.211) 0:02:44.454 ******* 2026-02-20 03:13:55.556148 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-20 03:13:55.556162 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-20 03:13:55.556225 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-20 03:13:55.556238 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-20 03:13:55.556249 | orchestrator | 2026-02-20 03:13:55.556260 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-20 03:13:55.556271 | orchestrator | Friday 20 February 2026 03:13:37 +0000 (0:00:06.173) 0:02:50.627 ******* 2026-02-20 03:13:55.556281 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:13:55.556292 | orchestrator | 2026-02-20 03:13:55.556303 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-02-20 03:13:55.556314 | orchestrator | Friday 20 February 2026 03:13:40 +0000 (0:00:03.083) 0:02:53.711 ******* 2026-02-20 03:13:55.556325 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:13:55.556336 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-20 03:13:55.556347 | orchestrator | 2026-02-20 03:13:55.556359 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-20 03:13:55.556374 | orchestrator | Friday 20 February 2026 03:13:44 +0000 (0:00:03.654) 0:02:57.365 ******* 2026-02-20 03:13:55.556386 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:13:55.556397 | orchestrator | 2026-02-20 03:13:55.556408 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-20 03:13:55.556419 | orchestrator | Friday 20 February 2026 03:13:47 +0000 (0:00:03.109) 0:03:00.475 ******* 2026-02-20 03:13:55.556430 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-20 03:13:55.556454 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-20 03:13:55.556465 | orchestrator | 2026-02-20 03:13:55.556476 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-20 03:13:55.556509 | orchestrator | Friday 20 February 2026 03:13:54 +0000 (0:00:07.068) 0:03:07.543 ******* 2026-02-20 03:13:55.556572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:55.556591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:55.556611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:55.556640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-20 03:13:59.934949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:13:59.935057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:13:59.935073 | orchestrator | 2026-02-20 03:13:59.935087 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-20 03:13:59.935100 | orchestrator | Friday 20 February 2026 03:13:55 +0000 (0:00:01.317) 0:03:08.861 ******* 2026-02-20 03:13:59.935111 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:13:59.935123 | orchestrator | 2026-02-20 03:13:59.935134 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-20 03:13:59.935145 | orchestrator | Friday 20 February 2026 03:13:55 +0000 (0:00:00.135) 0:03:08.997 ******* 2026-02-20 03:13:59.935156 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:13:59.935167 | 
orchestrator | skipping: [testbed-node-1] 2026-02-20 03:13:59.935177 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:13:59.935188 | orchestrator | 2026-02-20 03:13:59.935198 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-20 03:13:59.935209 | orchestrator | Friday 20 February 2026 03:13:55 +0000 (0:00:00.311) 0:03:09.308 ******* 2026-02-20 03:13:59.935220 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:13:59.935231 | orchestrator | 2026-02-20 03:13:59.935241 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-20 03:13:59.935252 | orchestrator | Friday 20 February 2026 03:13:56 +0000 (0:00:00.638) 0:03:09.946 ******* 2026-02-20 03:13:59.935262 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:13:59.935273 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:13:59.935284 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:13:59.935294 | orchestrator | 2026-02-20 03:13:59.935305 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-20 03:13:59.935316 | orchestrator | Friday 20 February 2026 03:13:57 +0000 (0:00:00.475) 0:03:10.421 ******* 2026-02-20 03:13:59.935327 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:13:59.935339 | orchestrator | 2026-02-20 03:13:59.935350 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-20 03:13:59.935385 | orchestrator | Friday 20 February 2026 03:13:57 +0000 (0:00:00.544) 0:03:10.966 ******* 2026-02-20 03:13:59.935416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:59.935450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:59.935466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:13:59.935480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:13:59.935506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:13:59.935519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:13:59.935532 | orchestrator | 2026-02-20 03:13:59.935552 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-20 03:14:01.538434 | orchestrator | Friday 20 February 2026 03:13:59 +0000 (0:00:02.275) 0:03:13.242 ******* 2026-02-20 03:14:01.538667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:01.538695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:01.538725 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:14:01.538751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:01.538823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:01.538885 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:14:01.538960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:01.539011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:01.539032 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:14:01.539051 | orchestrator | 2026-02-20 03:14:01.539070 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-20 03:14:01.539091 | orchestrator | Friday 20 February 2026 03:14:00 +0000 (0:00:00.815) 0:03:14.058 
******* 2026-02-20 03:14:01.539105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:01.539137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:01.539149 | orchestrator | skipping: [testbed-node-0] 
2026-02-20 03:14:01.539172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:03.894040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:03.894113 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
03:14:03.894122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:03.894153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:03.894158 | orchestrator | skipping: [testbed-node-2] 2026-02-20 
03:14:03.894162 | orchestrator | 2026-02-20 03:14:03.894167 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-20 03:14:03.894172 | orchestrator | Friday 20 February 2026 03:14:01 +0000 (0:00:00.789) 0:03:14.848 ******* 2026-02-20 03:14:03.894177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:03.894192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:03.894202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:03.894210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:03.894215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:03.894223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})
2026-02-20 03:14:10.051424 | orchestrator |
2026-02-20 03:14:10.051537 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-02-20 03:14:10.051555 | orchestrator | Friday 20 February 2026 03:14:03 +0000 (0:00:02.356) 0:03:17.205 *******
2026-02-20 03:14:10.051573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-20 03:14:10.051628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:10.051643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:10.051675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:10.051698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:10.051710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:14:10.051722 | orchestrator |
2026-02-20 03:14:10.051733 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-02-20 03:14:10.051744 | orchestrator | Friday 20 February 2026 03:14:09 +0000 (0:00:05.536) 0:03:22.741 *******
2026-02-20 03:14:10.051762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-20 03:14:10.051775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:10.051787 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:14:10.051810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:14.449518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:14:14.449614 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:14:14.449643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-20 03:14:14.449654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:14:14.449663 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:14.449671 | orchestrator |
2026-02-20 03:14:14.449680 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-20 03:14:14.449689 | orchestrator | Friday 20 February 2026 03:14:10 +0000 (0:00:00.621) 0:03:23.362 *******
2026-02-20 03:14:14.449697 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:14:14.449705 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:14:14.449713 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:14:14.449720 | orchestrator |
2026-02-20 03:14:14.449728 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-20 03:14:14.449736 | orchestrator | Friday 20 February 2026 03:14:11 +0000 (0:00:01.496) 0:03:24.859 *******
2026-02-20 03:14:14.449765 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:14:14.449774 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:14:14.449782 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:14.449790 | orchestrator |
2026-02-20 03:14:14.449798 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-20 03:14:14.449806 | orchestrator | Friday 20 February 2026 03:14:11 +0000 (0:00:00.342) 0:03:25.201 *******
2026-02-20 03:14:14.449830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:14.449844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:14.449854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-20 03:14:14.449870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:14:14.449880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:14:14.449894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:14:56.212843 | orchestrator |
2026-02-20 03:14:56.212929 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-20 03:14:56.212941 | orchestrator | Friday 20 February 2026 03:14:14 +0000 (0:00:02.125) 0:03:27.327 *******
2026-02-20 03:14:56.212983 | orchestrator |
2026-02-20 03:14:56.212994 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-20 03:14:56.213002 | orchestrator | Friday 20 February 2026 03:14:14 +0000 (0:00:00.137) 0:03:27.465 *******
2026-02-20 03:14:56.213010 | orchestrator |
2026-02-20 03:14:56.213018 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-20 03:14:56.213026 | orchestrator | Friday 20 February 2026 03:14:14 +0000 (0:00:00.147) 0:03:27.612 *******
2026-02-20 03:14:56.213034 | orchestrator |
2026-02-20 03:14:56.213042 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-20 03:14:56.213050 | orchestrator | Friday 20 February 2026 03:14:14 +0000 (0:00:00.142) 0:03:27.755 *******
2026-02-20 03:14:56.213058 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:14:56.213067 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:14:56.213074 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:14:56.213082 | orchestrator |
2026-02-20 03:14:56.213090 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-20 03:14:56.213098 | orchestrator | Friday 20 February 2026 03:14:34 +0000 (0:00:20.329) 0:03:48.085 *******
2026-02-20 03:14:56.213106 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:14:56.213114 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:14:56.213133 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:14:56.213141 | orchestrator |
2026-02-20 03:14:56.213149 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-20 03:14:56.213157 | orchestrator |
2026-02-20 03:14:56.213165 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-20 03:14:56.213173 | orchestrator | Friday 20 February 2026 03:14:45 +0000 (0:00:10.489) 0:03:58.574 *******
2026-02-20 03:14:56.213204 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:14:56.213214 | orchestrator |
2026-02-20 03:14:56.213222 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-20 03:14:56.213230 | orchestrator | Friday 20 February 2026 03:14:46 +0000 (0:00:01.141) 0:03:59.716 *******
2026-02-20 03:14:56.213238 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:14:56.213246 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:14:56.213254 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:14:56.213262 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:14:56.213269 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:14:56.213277 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:56.213285 | orchestrator |
2026-02-20 03:14:56.213293 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-20 03:14:56.213301 | orchestrator | Friday 20 February 2026 03:14:47 +0000 (0:00:00.707) 0:04:00.423 *******
2026-02-20 03:14:56.213309 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:14:56.213317 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:14:56.213324 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:56.213332 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:14:56.213341 | orchestrator |
2026-02-20 03:14:56.213349 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-20 03:14:56.213357 | orchestrator | Friday 20 February 2026 03:14:47 +0000 (0:00:00.798) 0:04:01.222 *******
2026-02-20 03:14:56.213365 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-20 03:14:56.213373 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-20 03:14:56.213381 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-20 03:14:56.213389 | orchestrator |
2026-02-20 03:14:56.213397 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-20 03:14:56.213405 | orchestrator | Friday 20 February 2026 03:14:48 +0000 (0:00:00.833) 0:04:02.056 *******
2026-02-20 03:14:56.213413 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-20 03:14:56.213422 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-20 03:14:56.213431 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-20 03:14:56.213439 | orchestrator |
2026-02-20 03:14:56.213448 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-20 03:14:56.213456 | orchestrator | Friday 20 February 2026 03:14:49 +0000 (0:00:01.186) 0:04:03.242 *******
2026-02-20 03:14:56.213466 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-20 03:14:56.213475 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:14:56.213484 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-20 03:14:56.213493 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:14:56.213502 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-20 03:14:56.213511 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:14:56.213520 | orchestrator |
2026-02-20 03:14:56.213530 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-20 03:14:56.213539 | orchestrator | Friday 20 February 2026 03:14:50 +0000 (0:00:00.531) 0:04:03.774 *******
2026-02-20 03:14:56.213548 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213557 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213566 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213575 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213584 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:14:56.213593 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213608 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213618 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:14:56.213639 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213649 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213658 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:56.213667 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 03:14:56.213676 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213689 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213703 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 03:14:56.213719 | orchestrator |
2026-02-20 03:14:56.213738 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-20 03:14:56.213752 | orchestrator | Friday 20 February 2026 03:14:51 +0000 (0:00:01.184) 0:04:05.016 *******
2026-02-20 03:14:56.213765 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:14:56.213779 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:14:56.213790 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:14:56.213804 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:14:56.213817 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:14:56.213831 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:14:56.213844 | orchestrator |
2026-02-20 03:14:56.213858 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-20 03:14:56.213878 | orchestrator |
Friday 20 February 2026 03:14:52 +0000 (0:00:01.184) 0:04:06.201 ******* 2026-02-20 03:14:56.213892 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:14:56.213906 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:14:56.213919 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:14:56.213933 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:14:56.213947 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:14:56.213991 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:14:56.214005 | orchestrator | 2026-02-20 03:14:56.214069 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-20 03:14:56.214087 | orchestrator | Friday 20 February 2026 03:14:54 +0000 (0:00:01.634) 0:04:07.836 ******* 2026-02-20 03:14:56.214103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:14:56.214119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:14:56.214155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:14:57.837950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.838092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.838104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.838129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:14:57.838141 | orchestrator | 2026-02-20 03:14:57.838154 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-20 03:14:57.838167 | orchestrator | Friday 20 
February 2026 03:14:56 +0000 (0:00:02.126) 0:04:09.962 ******* 2026-02-20 03:14:57.838179 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:14:57.838191 | orchestrator | 2026-02-20 03:14:57.838203 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-20 03:14:57.838223 | orchestrator | Friday 20 February 2026 03:14:57 +0000 (0:00:01.188) 0:04:11.151 ******* 2026-02-20 03:15:01.092888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 
03:15:01.093066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093096 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:01.093123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:02.800427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:02.800557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:02.800577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:02.800613 | orchestrator | 2026-02-20 03:15:02.800627 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-20 03:15:02.800640 | orchestrator | Friday 20 February 2026 03:15:01 +0000 (0:00:03.527) 0:04:14.679 ******* 2026-02-20 03:15:02.800653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:02.800666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:02.800694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:02.800707 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:02.800725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:02.800738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:02.800760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:02.800771 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:02.800782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:02.800802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:03.806457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:03.806556 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:03.806573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:03.806607 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:03.806618 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:03.806629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:03.806640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:03.806650 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
03:15:03.806660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:03.806688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:03.806699 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:03.806709 | orchestrator | 2026-02-20 03:15:03.806720 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-20 03:15:03.806738 | orchestrator | Friday 20 February 2026 03:15:02 +0000 (0:00:01.514) 0:04:16.193 ******* 2026-02-20 03:15:03.806749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:03.806768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:03.806779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:03.806790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:03.806808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:08.343419 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:08.343560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:08.343644 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:08.343665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:08.343682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:08.343700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:08.343716 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:08.343734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:08.343778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:08.343809 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:08.343825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:08.343841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:08.343857 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:08.343874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:15:08.343890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:15:08.343906 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:08.343922 | orchestrator | 2026-02-20 03:15:08.343939 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-20 03:15:08.343958 | orchestrator | Friday 20 February 2026 03:15:05 +0000 (0:00:02.142) 0:04:18.335 ******* 2026-02-20 03:15:08.343998 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:08.344014 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:08.344029 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:08.344045 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:15:08.344062 | orchestrator | 2026-02-20 03:15:08.344078 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-20 
03:15:08.344094 | orchestrator | Friday 20 February 2026 03:15:05 +0000 (0:00:00.852) 0:04:19.188 *******
2026-02-20 03:15:08.344111 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:15:08.344126 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:15:08.344141 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:15:08.344157 | orchestrator |
2026-02-20 03:15:08.344182 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-20 03:15:08.344197 | orchestrator | Friday 20 February 2026 03:15:06 +0000 (0:00:01.048) 0:04:20.236 *******
2026-02-20 03:15:08.344214 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:15:08.344229 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:15:08.344244 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:15:08.344260 | orchestrator |
2026-02-20 03:15:08.344276 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-20 03:15:08.344290 | orchestrator | Friday 20 February 2026 03:15:07 +0000 (0:00:00.917) 0:04:21.154 *******
2026-02-20 03:15:08.344306 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:15:08.344323 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:15:08.344348 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:15:29.142450 | orchestrator |
2026-02-20 03:15:29.142592 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-20 03:15:29.142612 | orchestrator | Friday 20 February 2026 03:15:08 +0000 (0:00:00.504) 0:04:21.658 *******
2026-02-20 03:15:29.142626 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:15:29.142640 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:15:29.142671 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:15:29.142692 | orchestrator |
2026-02-20 03:15:29.142706 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-20 03:15:29.142719 | orchestrator | Friday 20 February 2026 03:15:08 +0000 (0:00:00.483) 0:04:22.142 *******
2026-02-20 03:15:29.142733 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-20 03:15:29.142747 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-20 03:15:29.142760 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-20 03:15:29.142773 | orchestrator |
2026-02-20 03:15:29.142786 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-20 03:15:29.142799 | orchestrator | Friday 20 February 2026 03:15:10 +0000 (0:00:01.380) 0:04:23.523 *******
2026-02-20 03:15:29.142811 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-20 03:15:29.142825 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-20 03:15:29.142837 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-20 03:15:29.142850 | orchestrator |
2026-02-20 03:15:29.142863 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-20 03:15:29.142876 | orchestrator | Friday 20 February 2026 03:15:11 +0000 (0:00:01.172) 0:04:24.696 *******
2026-02-20 03:15:29.142889 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-20 03:15:29.142902 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-20 03:15:29.142914 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-20 03:15:29.142927 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-20 03:15:29.142940 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-20 03:15:29.142952 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-20 03:15:29.142965 | orchestrator |
2026-02-20 03:15:29.143001 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-20 03:15:29.143016 | orchestrator | Friday 20 February 2026 03:15:15 +0000 (0:00:03.723) 0:04:28.419 *******
2026-02-20 03:15:29.143030 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:15:29.143045 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:15:29.143059 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:15:29.143072 | orchestrator |
2026-02-20 03:15:29.143086 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-20 03:15:29.143100 | orchestrator | Friday 20 February 2026 03:15:15 +0000 (0:00:00.297) 0:04:28.717 *******
2026-02-20 03:15:29.143115 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:15:29.143129 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:15:29.143142 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:15:29.143156 | orchestrator |
2026-02-20 03:15:29.143170 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-20 03:15:29.143206 | orchestrator | Friday 20 February 2026 03:15:15 +0000 (0:00:00.468) 0:04:29.185 *******
2026-02-20 03:15:29.143221 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:15:29.143235 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:15:29.143248 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:15:29.143262 | orchestrator |
2026-02-20 03:15:29.143275 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-20 03:15:29.143288 | orchestrator | Friday 20 February 2026 03:15:17 +0000 (0:00:01.160) 0:04:30.346 *******
2026-02-20 03:15:29.143302 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-20 03:15:29.143317 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-20 03:15:29.143330 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-20 03:15:29.143342 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-20 03:15:29.143355 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-20 03:15:29.143367 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-20 03:15:29.143379 | orchestrator |
2026-02-20 03:15:29.143392 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-20 03:15:29.143405 | orchestrator | Friday 20 February 2026 03:15:20 +0000 (0:00:03.308) 0:04:33.654 *******
2026-02-20 03:15:29.143418 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-20 03:15:29.143431 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-20 03:15:29.143443 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-20 03:15:29.143456 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-20 03:15:29.143468 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:15:29.143481 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-20 03:15:29.143493 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:15:29.143506 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-20 03:15:29.143519 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:15:29.143531 | orchestrator |
2026-02-20 03:15:29.143544 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-20 03:15:29.143558 | orchestrator | Friday 20 February 2026 03:15:23 +0000 (0:00:03.273) 0:04:36.927 *******
2026-02-20 03:15:29.143590 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:15:29.143604 | orchestrator |
2026-02-20 03:15:29.143617 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-20 03:15:29.143629 | orchestrator | Friday 20 February 2026 03:15:23 +0000 (0:00:00.128) 0:04:37.056 *******
2026-02-20 03:15:29.143641 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:15:29.143659 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:15:29.143672 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:15:29.143685 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:15:29.143698 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:15:29.143710 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:15:29.143723 | orchestrator |
2026-02-20 03:15:29.143735 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-20 03:15:29.143748 | orchestrator | Friday 20 February 2026 03:15:24 +0000 (0:00:00.799) 0:04:37.856 *******
2026-02-20 03:15:29.143760 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:15:29.143773 | orchestrator |
2026-02-20 03:15:29.143785 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-20 03:15:29.143820 | orchestrator | Friday 20 February 2026 03:15:25 +0000 (0:00:00.650) 0:04:38.506 *******
2026-02-20 03:15:29.143841 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:15:29.143853 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:15:29.143864 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:15:29.143876 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:15:29.143888 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:15:29.143900 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:15:29.143912 | orchestrator |
2026-02-20 03:15:29.143924 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-20 03:15:29.143936 | orchestrator | Friday 20 February 2026 03:15:25 +0000 (0:00:00.742) 0:04:39.248 ******* 2026-02-20 03:15:29.143953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:29.143970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:29.143999 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:15:29.144029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061367 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:30.061446 | orchestrator | 2026-02-20 03:15:30.061467 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-20 03:15:30.061487 | orchestrator | Friday 20 February 2026 03:15:29 +0000 (0:00:03.541) 0:04:42.789 ******* 2026-02-20 03:15:30.061524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:35.467184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:35.467294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:35.467312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:35.467324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:35.467336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:35.467403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:35.467501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:52.378401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:52.378534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:15:52.378565 | orchestrator | 2026-02-20 03:15:52.378584 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-20 03:15:52.378642 | orchestrator | Friday 20 February 2026 03:15:35 +0000 (0:00:06.265) 0:04:49.055 ******* 2026-02-20 03:15:52.378660 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:52.378678 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:52.378693 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:52.378707 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:52.378722 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:52.378737 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:52.378753 | orchestrator | 2026-02-20 03:15:52.378768 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-20 03:15:52.378784 | orchestrator | Friday 20 February 2026 03:15:37 +0000 (0:00:01.351) 0:04:50.406 ******* 2026-02-20 03:15:52.378799 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-20 03:15:52.378815 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-20 03:15:52.378831 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-20 03:15:52.378845 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-20 03:15:52.378861 | orchestrator | skipping: [testbed-node-0] 
=> (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-20 03:15:52.378877 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:52.378893 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-20 03:15:52.378909 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-20 03:15:52.378952 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-20 03:15:52.378969 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:52.378985 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-20 03:15:52.379033 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:52.379047 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-20 03:15:52.379058 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-20 03:15:52.379068 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-20 03:15:52.379078 | orchestrator | 2026-02-20 03:15:52.379088 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-20 03:15:52.379098 | orchestrator | Friday 20 February 2026 03:15:40 +0000 (0:00:03.440) 0:04:53.847 ******* 2026-02-20 03:15:52.379108 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:52.379118 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:52.379128 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:52.379138 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:52.379147 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:52.379157 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:52.379166 | orchestrator | 2026-02-20 03:15:52.379176 | orchestrator | 
TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-20 03:15:52.379187 | orchestrator | Friday 20 February 2026 03:15:41 +0000 (0:00:00.588) 0:04:54.436 ******* 2026-02-20 03:15:52.379197 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-20 03:15:52.379221 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-20 03:15:52.379231 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-20 03:15:52.379241 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-20 03:15:52.379272 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-20 03:15:52.379283 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-20 03:15:52.379292 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379302 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379310 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379319 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:52.379328 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379336 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379345 | orchestrator 
| skipping: [testbed-node-0] 2026-02-20 03:15:52.379353 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-20 03:15:52.379362 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:52.379370 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379379 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379388 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379405 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379414 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379422 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-20 03:15:52.379431 | orchestrator | 2026-02-20 03:15:52.379439 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-20 03:15:52.379448 | orchestrator | Friday 20 February 2026 03:15:46 +0000 (0:00:05.000) 0:04:59.436 ******* 2026-02-20 03:15:52.379457 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-20 03:15:52.379466 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-20 03:15:52.379474 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-20 03:15:52.379483 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-20 03:15:52.379494 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:15:52.379507 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-20 03:15:52.379523 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-20 03:15:52.379533 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:15:52.379541 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-20 03:15:52.379550 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-20 03:15:52.379558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-20 03:15:52.379567 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-20 03:15:52.379576 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-20 03:15:52.379584 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:52.379593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-20 03:15:52.379602 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:52.379610 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-20 03:15:52.379619 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:52.379627 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:15:52.379636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:15:52.379644 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-20 03:15:52.379657 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:15:52.379666 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:15:52.379675 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-20 03:15:52.379683 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:15:52.379697 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:15:56.774727 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-20 03:15:56.774832 | orchestrator | 2026-02-20 03:15:56.774858 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-20 03:15:56.774872 | orchestrator | Friday 20 February 2026 03:15:52 +0000 (0:00:06.231) 0:05:05.668 ******* 2026-02-20 03:15:56.774883 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:56.774920 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:56.774931 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:56.774942 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:56.774953 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:56.774964 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:56.774974 | orchestrator | 2026-02-20 03:15:56.774985 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-20 03:15:56.774996 | orchestrator | Friday 20 February 2026 03:15:53 +0000 (0:00:00.703) 0:05:06.371 ******* 2026-02-20 03:15:56.775074 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:56.775085 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:15:56.775095 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:15:56.775106 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:56.775117 | 
orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:56.775128 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:56.775140 | orchestrator | 2026-02-20 03:15:56.775151 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-20 03:15:56.775161 | orchestrator | Friday 20 February 2026 03:15:53 +0000 (0:00:00.574) 0:05:06.945 ******* 2026-02-20 03:15:56.775172 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:15:56.775183 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:15:56.775194 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:15:56.775204 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:15:56.775215 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:15:56.775226 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:15:56.775236 | orchestrator | 2026-02-20 03:15:56.775250 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-20 03:15:56.775263 | orchestrator | Friday 20 February 2026 03:15:55 +0000 (0:00:02.033) 0:05:08.979 ******* 2026-02-20 03:15:56.775279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:56.775296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:56.775311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:56.775348 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:15:56.775392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:56.775407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:56.775420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:15:56.775433 | orchestrator | skipping: 
[testbed-node-4] 2026-02-20 03:15:56.775445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-20 03:15:56.775459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-20 03:15:56.775494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-20 03:16:00.102891 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:16:00.103067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:16:00.103098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:16:00.103114 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:16:00.103130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:16:00.103147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:16:00.103163 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:16:00.103176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-20 03:16:00.103227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:16:00.103261 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:16:00.103277 | orchestrator | 2026-02-20 03:16:00.103293 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-20 03:16:00.103310 | orchestrator | Friday 20 February 2026 03:15:57 +0000 (0:00:01.348) 0:05:10.327 ******* 2026-02-20 03:16:00.103326 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-20 03:16:00.103361 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103378 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:16:00.103392 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-20 03:16:00.103405 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103414 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:16:00.103422 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-20 03:16:00.103429 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103437 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:16:00.103445 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-20 03:16:00.103453 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103460 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:16:00.103468 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-20 03:16:00.103476 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103484 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:16:00.103492 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-20 03:16:00.103500 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-20 03:16:00.103508 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:16:00.103516 | orchestrator | 2026-02-20 03:16:00.103524 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-20 03:16:00.103532 | orchestrator | Friday 20 February 2026 03:15:57 +0000 (0:00:00.810) 0:05:11.138 ******* 2026-02-20 03:16:00.103542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:16:00.103552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:16:00.103574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-20 03:16:00.103591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-20 03:16:02.202976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203091 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-20 03:16:02.203167 | orchestrator | 2026-02-20 03:16:02.203179 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-20 03:16:02.203192 | orchestrator | Friday 20 February 2026 03:16:00 +0000 (0:00:02.682) 0:05:13.821 ******* 2026-02-20 
03:16:02.203203 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:16:02.203215 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:16:02.203225 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:16:02.203244 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:16:02.203258 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:16:02.203270 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:16:02.203282 | orchestrator | 2026-02-20 03:16:02.203295 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-20 03:16:02.203308 | orchestrator | Friday 20 February 2026 03:16:01 +0000 (0:00:00.736) 0:05:14.557 ******* 2026-02-20 03:16:02.203321 | orchestrator | 2026-02-20 03:16:02.203334 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-20 03:16:02.203346 | orchestrator | Friday 20 February 2026 03:16:01 +0000 (0:00:00.136) 0:05:14.694 ******* 2026-02-20 03:16:02.203360 | orchestrator | 2026-02-20 03:16:02.203372 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-20 03:16:02.203385 | orchestrator | Friday 20 February 2026 03:16:01 +0000 (0:00:00.132) 0:05:14.826 ******* 2026-02-20 03:16:02.203397 | orchestrator | 2026-02-20 03:16:02.203408 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-20 03:16:02.203426 | orchestrator | Friday 20 February 2026 03:16:01 +0000 (0:00:00.133) 0:05:14.959 ******* 2026-02-20 03:18:57.763454 | orchestrator | 2026-02-20 03:18:57.763552 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-20 03:18:57.763564 | orchestrator | Friday 20 February 2026 03:16:01 +0000 (0:00:00.134) 0:05:15.094 ******* 2026-02-20 03:18:57.763572 | orchestrator | 2026-02-20 03:18:57.763579 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-20 03:18:57.763586 | orchestrator | Friday 20 February 2026 03:16:02 +0000 (0:00:00.277) 0:05:15.371 ******* 2026-02-20 03:18:57.763593 | orchestrator | 2026-02-20 03:18:57.763600 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-20 03:18:57.763607 | orchestrator | Friday 20 February 2026 03:16:02 +0000 (0:00:00.133) 0:05:15.505 ******* 2026-02-20 03:18:57.763614 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:18:57.763621 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:18:57.763647 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:18:57.763654 | orchestrator | 2026-02-20 03:18:57.763661 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-20 03:18:57.763668 | orchestrator | Friday 20 February 2026 03:16:09 +0000 (0:00:06.902) 0:05:22.407 ******* 2026-02-20 03:18:57.763675 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:18:57.763682 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:18:57.763688 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:18:57.763695 | orchestrator | 2026-02-20 03:18:57.763702 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-20 03:18:57.763709 | orchestrator | Friday 20 February 2026 03:16:23 +0000 (0:00:14.689) 0:05:37.097 ******* 2026-02-20 03:18:57.763715 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:18:57.763722 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:18:57.763729 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:18:57.763735 | orchestrator | 2026-02-20 03:18:57.763742 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-20 03:18:57.763749 | orchestrator | Friday 20 February 2026 03:16:45 +0000 (0:00:21.348) 0:05:58.445 ******* 2026-02-20 03:18:57.763756 | orchestrator | changed: 
[testbed-node-4] 2026-02-20 03:18:57.763763 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:18:57.763769 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:18:57.763776 | orchestrator | 2026-02-20 03:18:57.763782 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-20 03:18:57.763789 | orchestrator | Friday 20 February 2026 03:17:23 +0000 (0:00:38.212) 0:06:36.658 ******* 2026-02-20 03:18:57.763796 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-20 03:18:57.763804 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-20 03:18:57.763811 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-20 03:18:57.763818 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:18:57.763825 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:18:57.763831 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:18:57.763838 | orchestrator | 2026-02-20 03:18:57.763845 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-20 03:18:57.763851 | orchestrator | Friday 20 February 2026 03:17:29 +0000 (0:00:06.203) 0:06:42.861 ******* 2026-02-20 03:18:57.763858 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:18:57.763865 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:18:57.763872 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:18:57.763879 | orchestrator | 2026-02-20 03:18:57.763885 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-20 03:18:57.763892 | orchestrator | Friday 20 February 2026 03:17:30 +0000 (0:00:00.805) 0:06:43.666 ******* 2026-02-20 03:18:57.763899 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:18:57.763906 | orchestrator | changed: [testbed-node-4] 2026-02-20 
03:18:57.763912 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:18:57.763919 | orchestrator | 2026-02-20 03:18:57.763926 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-20 03:18:57.763933 | orchestrator | Friday 20 February 2026 03:17:55 +0000 (0:00:25.017) 0:07:08.683 ******* 2026-02-20 03:18:57.763940 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:18:57.763946 | orchestrator | 2026-02-20 03:18:57.763953 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-20 03:18:57.763960 | orchestrator | Friday 20 February 2026 03:17:55 +0000 (0:00:00.127) 0:07:08.810 ******* 2026-02-20 03:18:57.763966 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:18:57.763973 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:18:57.763980 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:18:57.763986 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:18:57.763993 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:18:57.764007 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-02-20 03:18:57.764016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 03:18:57.764024 | orchestrator | 2026-02-20 03:18:57.764044 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-20 03:18:57.764052 | orchestrator | Friday 20 February 2026 03:18:17 +0000 (0:00:21.552) 0:07:30.363 ******* 2026-02-20 03:18:57.764061 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:18:57.764069 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:18:57.764076 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:18:57.764084 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:18:57.764091 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:18:57.764099 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:18:57.764106 | orchestrator | 2026-02-20 03:18:57.764114 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-20 03:18:57.764121 | orchestrator | Friday 20 February 2026 03:18:24 +0000 (0:00:07.321) 0:07:37.685 ******* 2026-02-20 03:18:57.764183 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:18:57.764191 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:18:57.764199 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:18:57.764206 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:18:57.764214 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:18:57.764235 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-02-20 03:18:57.764243 | orchestrator | 2026-02-20 03:18:57.764251 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-20 03:18:57.764259 | orchestrator | Friday 20 February 2026 03:18:27 +0000 (0:00:03.114) 0:07:40.799 ******* 2026-02-20 03:18:57.764266 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 03:18:57.764274 | 
orchestrator | 2026-02-20 03:18:57.764282 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-20 03:18:57.764289 | orchestrator | Friday 20 February 2026 03:18:39 +0000 (0:00:12.366) 0:07:53.166 ******* 2026-02-20 03:18:57.764297 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 03:18:57.764305 | orchestrator | 2026-02-20 03:18:57.764312 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-20 03:18:57.764320 | orchestrator | Friday 20 February 2026 03:18:41 +0000 (0:00:01.419) 0:07:54.586 ******* 2026-02-20 03:18:57.764328 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:18:57.764335 | orchestrator | 2026-02-20 03:18:57.764343 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-20 03:18:57.764351 | orchestrator | Friday 20 February 2026 03:18:42 +0000 (0:00:01.539) 0:07:56.126 ******* 2026-02-20 03:18:57.764359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 03:18:57.764366 | orchestrator | 2026-02-20 03:18:57.764374 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-20 03:18:57.764380 | orchestrator | Friday 20 February 2026 03:18:53 +0000 (0:00:11.117) 0:08:07.243 ******* 2026-02-20 03:18:57.764387 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:18:57.764395 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:18:57.764402 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:18:57.764408 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:18:57.764415 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:18:57.764421 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:18:57.764428 | orchestrator | 2026-02-20 03:18:57.764435 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-20 03:18:57.764441 | orchestrator | 2026-02-20 
03:18:57.764448 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-20 03:18:57.764455 | orchestrator | Friday 20 February 2026 03:18:55 +0000 (0:00:01.709) 0:08:08.953 ******* 2026-02-20 03:18:57.764461 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:18:57.764468 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:18:57.764481 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:18:57.764487 | orchestrator | 2026-02-20 03:18:57.764494 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-20 03:18:57.764501 | orchestrator | 2026-02-20 03:18:57.764507 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-20 03:18:57.764514 | orchestrator | Friday 20 February 2026 03:18:56 +0000 (0:00:00.914) 0:08:09.868 ******* 2026-02-20 03:18:57.764521 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:18:57.764527 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:18:57.764534 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:18:57.764541 | orchestrator | 2026-02-20 03:18:57.764547 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-20 03:18:57.764554 | orchestrator | 2026-02-20 03:18:57.764561 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-20 03:18:57.764567 | orchestrator | Friday 20 February 2026 03:18:57 +0000 (0:00:00.662) 0:08:10.530 ******* 2026-02-20 03:18:57.764574 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-20 03:18:57.764581 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-20 03:18:57.764588 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-20 03:18:57.764595 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-20 03:18:57.764601 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-20 03:18:57.764608 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-20 03:18:57.764615 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:18:57.764621 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-20 03:18:57.764628 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-20 03:18:57.764635 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-20 03:18:57.764641 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-20 03:18:57.764648 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-20 03:18:57.764654 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-20 03:18:57.764661 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:18:57.764668 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-20 03:18:57.764674 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-20 03:18:57.764681 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-20 03:18:57.764692 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-20 03:18:57.764698 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-20 03:18:57.764705 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-20 03:18:57.764712 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:18:57.764718 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-20 03:18:57.764725 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-20 03:18:57.764732 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-20 03:18:57.764738 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-20 03:18:57.764745 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-02-20 03:18:57.764751 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-02-20 03:18:57.764758 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:18:57.764765 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-02-20 03:18:57.764776 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-20 03:19:00.655789 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-20 03:19:00.655884 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-02-20 03:19:00.655898 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-02-20 03:19:00.655908 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-02-20 03:19:00.655940 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:00.655951 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-02-20 03:19:00.655961 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-20 03:19:00.655971 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-20 03:19:00.655980 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-20 03:19:00.655990 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-20 03:19:00.656000 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-20 03:19:00.656009 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:00.656020 | orchestrator | 2026-02-20 03:19:00.656030 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-20 03:19:00.656040 | orchestrator | 2026-02-20 03:19:00.656050 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-20 03:19:00.656061 | orchestrator | Friday 20 February 2026 03:18:58 +0000 (0:00:01.278) 
0:08:11.809 ******* 2026-02-20 03:19:00.656070 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-20 03:19:00.656080 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-20 03:19:00.656090 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:00.656099 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-20 03:19:00.656109 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-20 03:19:00.656119 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:00.656179 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-20 03:19:00.656190 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-20 03:19:00.656200 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:00.656210 | orchestrator | 2026-02-20 03:19:00.656219 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-20 03:19:00.656229 | orchestrator | 2026-02-20 03:19:00.656239 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-20 03:19:00.656248 | orchestrator | Friday 20 February 2026 03:18:59 +0000 (0:00:00.545) 0:08:12.355 ******* 2026-02-20 03:19:00.656258 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:00.656268 | orchestrator | 2026-02-20 03:19:00.656277 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-20 03:19:00.656287 | orchestrator | 2026-02-20 03:19:00.656297 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-20 03:19:00.656306 | orchestrator | Friday 20 February 2026 03:18:59 +0000 (0:00:00.802) 0:08:13.157 ******* 2026-02-20 03:19:00.656316 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:00.656326 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:00.656338 | orchestrator | skipping: [testbed-node-2] 
2026-02-20 03:19:00.656349 | orchestrator | 2026-02-20 03:19:00.656360 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:19:00.656372 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 03:19:00.656386 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-02-20 03:19:00.656398 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-20 03:19:00.656409 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-20 03:19:00.656420 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-20 03:19:00.656432 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-20 03:19:00.656465 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-20 03:19:00.656477 | orchestrator | 2026-02-20 03:19:00.656488 | orchestrator | 2026-02-20 03:19:00.656499 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:19:00.656511 | orchestrator | Friday 20 February 2026 03:19:00 +0000 (0:00:00.441) 0:08:13.598 ******* 2026-02-20 03:19:00.656523 | orchestrator | =============================================================================== 2026-02-20 03:19:00.656534 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.21s 2026-02-20 03:19:00.656546 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.65s 2026-02-20 03:19:00.656557 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.02s 2026-02-20 03:19:00.656568 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 22.12s 2026-02-20 03:19:00.656579 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.55s 2026-02-20 03:19:00.656603 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.35s 2026-02-20 03:19:00.656633 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.33s 2026-02-20 03:19:00.656645 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.50s 2026-02-20 03:19:00.656657 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.69s 2026-02-20 03:19:00.656668 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.13s 2026-02-20 03:19:00.656679 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.37s 2026-02-20 03:19:00.656690 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.11s 2026-02-20 03:19:00.656700 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.40s 2026-02-20 03:19:00.656709 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.32s 2026-02-20 03:19:00.656719 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.12s 2026-02-20 03:19:00.656729 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.49s 2026-02-20 03:19:00.656738 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.57s 2026-02-20 03:19:00.656748 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.32s 2026-02-20 03:19:00.656757 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.07s 2026-02-20 03:19:00.656767 | orchestrator | nova-cell : Restart 
nova-conductor container ---------------------------- 6.90s 2026-02-20 03:19:02.915958 | orchestrator | 2026-02-20 03:19:02 | INFO  | Task f8b2db6e-cf4d-47c5-8f86-85f276b448f7 (horizon) was prepared for execution. 2026-02-20 03:19:02.916056 | orchestrator | 2026-02-20 03:19:02 | INFO  | It takes a moment until task f8b2db6e-cf4d-47c5-8f86-85f276b448f7 (horizon) has been started and output is visible here. 2026-02-20 03:19:09.776879 | orchestrator | 2026-02-20 03:19:09.776993 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:19:09.777009 | orchestrator | 2026-02-20 03:19:09.777021 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:19:09.777032 | orchestrator | Friday 20 February 2026 03:19:06 +0000 (0:00:00.248) 0:00:00.248 ******* 2026-02-20 03:19:09.777044 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:09.777056 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:09.777067 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:09.777079 | orchestrator | 2026-02-20 03:19:09.777090 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:19:09.777101 | orchestrator | Friday 20 February 2026 03:19:07 +0000 (0:00:00.295) 0:00:00.544 ******* 2026-02-20 03:19:09.777190 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-20 03:19:09.777206 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-20 03:19:09.777217 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-20 03:19:09.777227 | orchestrator | 2026-02-20 03:19:09.777238 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-20 03:19:09.777249 | orchestrator | 2026-02-20 03:19:09.777260 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-20 03:19:09.777271 | 
orchestrator | Friday 20 February 2026 03:19:07 +0000 (0:00:00.426) 0:00:00.971 ******* 2026-02-20 03:19:09.777282 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:19:09.777294 | orchestrator | 2026-02-20 03:19:09.777305 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-20 03:19:09.777316 | orchestrator | Friday 20 February 2026 03:19:08 +0000 (0:00:00.522) 0:00:01.493 ******* 2026-02-20 03:19:09.777349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:09.777390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:09.777421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:09.777435 | orchestrator | 2026-02-20 03:19:09.777448 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-20 03:19:09.777461 | orchestrator | Friday 20 February 2026 03:19:09 +0000 (0:00:01.127) 0:00:02.621 ******* 2026-02-20 03:19:09.777474 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:09.777486 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:09.777499 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:09.777512 | orchestrator | 2026-02-20 03:19:09.777525 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-20 03:19:09.777544 | orchestrator | Friday 20 February 2026 03:19:09 +0000 (0:00:00.428) 0:00:03.049 ******* 2026-02-20 03:19:09.777565 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-02-20 03:19:15.325004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-20 03:19:15.325115 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-20 03:19:15.325131 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-20 03:19:15.325202 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-20 03:19:15.325215 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-20 03:19:15.325227 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-20 03:19:15.325238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-20 03:19:15.325249 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-20 03:19:15.325260 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-20 03:19:15.325271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-20 03:19:15.325281 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-20 03:19:15.325292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-20 03:19:15.325303 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-20 03:19:15.325321 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-20 03:19:15.325335 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-20 03:19:15.325346 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-20 03:19:15.325357 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-20 03:19:15.325368 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-20 03:19:15.325378 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-20 03:19:15.325389 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-20 03:19:15.325400 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-20 03:19:15.325428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-20 03:19:15.325440 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-20 03:19:15.325452 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-20 03:19:15.325465 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-20 03:19:15.325482 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-20 03:19:15.325501 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-20 03:19:15.325518 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-20 03:19:15.325538 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 
2026-02-20 03:19:15.325588 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-20 03:19:15.325609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-20 03:19:15.325629 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-20 03:19:15.325649 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-20 03:19:15.325668 | orchestrator | 2026-02-20 03:19:15.325682 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.325694 | orchestrator | Friday 20 February 2026 03:19:10 +0000 (0:00:00.692) 0:00:03.742 ******* 2026-02-20 03:19:15.325705 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.325724 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.325751 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:15.325771 | orchestrator | 2026-02-20 03:19:15.325789 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.325807 | orchestrator | Friday 20 February 2026 03:19:10 +0000 (0:00:00.300) 0:00:04.042 ******* 2026-02-20 03:19:15.325824 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.325842 | orchestrator | 2026-02-20 03:19:15.325885 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.325901 | orchestrator | Friday 20 February 2026 03:19:10 +0000 (0:00:00.271) 0:00:04.313 ******* 2026-02-20 03:19:15.325912 | orchestrator | skipping: [testbed-node-0] 2026-02-20 
03:19:15.325922 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.325933 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.325944 | orchestrator | 2026-02-20 03:19:15.325954 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.325965 | orchestrator | Friday 20 February 2026 03:19:11 +0000 (0:00:00.284) 0:00:04.598 ******* 2026-02-20 03:19:15.325976 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.325987 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.325997 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:15.326008 | orchestrator | 2026-02-20 03:19:15.326084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.326096 | orchestrator | Friday 20 February 2026 03:19:11 +0000 (0:00:00.294) 0:00:04.892 ******* 2026-02-20 03:19:15.326107 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326120 | orchestrator | 2026-02-20 03:19:15.326186 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.326201 | orchestrator | Friday 20 February 2026 03:19:11 +0000 (0:00:00.117) 0:00:05.009 ******* 2026-02-20 03:19:15.326212 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326222 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.326233 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.326244 | orchestrator | 2026-02-20 03:19:15.326254 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.326265 | orchestrator | Friday 20 February 2026 03:19:11 +0000 (0:00:00.265) 0:00:05.275 ******* 2026-02-20 03:19:15.326276 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.326287 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.326298 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:15.326308 | orchestrator | 
2026-02-20 03:19:15.326319 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.326330 | orchestrator | Friday 20 February 2026 03:19:12 +0000 (0:00:00.450) 0:00:05.726 ******* 2026-02-20 03:19:15.326341 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326351 | orchestrator | 2026-02-20 03:19:15.326363 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.326373 | orchestrator | Friday 20 February 2026 03:19:12 +0000 (0:00:00.133) 0:00:05.860 ******* 2026-02-20 03:19:15.326397 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326408 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.326419 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.326429 | orchestrator | 2026-02-20 03:19:15.326440 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.326468 | orchestrator | Friday 20 February 2026 03:19:12 +0000 (0:00:00.294) 0:00:06.154 ******* 2026-02-20 03:19:15.326479 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.326496 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.326514 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:15.326525 | orchestrator | 2026-02-20 03:19:15.326536 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.326547 | orchestrator | Friday 20 February 2026 03:19:13 +0000 (0:00:00.333) 0:00:06.488 ******* 2026-02-20 03:19:15.326558 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326568 | orchestrator | 2026-02-20 03:19:15.326580 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.326598 | orchestrator | Friday 20 February 2026 03:19:13 +0000 (0:00:00.116) 0:00:06.604 ******* 2026-02-20 03:19:15.326615 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 03:19:15.326632 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.326652 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.326671 | orchestrator | 2026-02-20 03:19:15.326688 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.326699 | orchestrator | Friday 20 February 2026 03:19:13 +0000 (0:00:00.496) 0:00:07.100 ******* 2026-02-20 03:19:15.326710 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.326721 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.326732 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:15.326742 | orchestrator | 2026-02-20 03:19:15.326753 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.326764 | orchestrator | Friday 20 February 2026 03:19:14 +0000 (0:00:00.303) 0:00:07.404 ******* 2026-02-20 03:19:15.326775 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326785 | orchestrator | 2026-02-20 03:19:15.326797 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.326807 | orchestrator | Friday 20 February 2026 03:19:14 +0000 (0:00:00.122) 0:00:07.527 ******* 2026-02-20 03:19:15.326818 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326829 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.326839 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.326850 | orchestrator | 2026-02-20 03:19:15.326861 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.326871 | orchestrator | Friday 20 February 2026 03:19:14 +0000 (0:00:00.285) 0:00:07.812 ******* 2026-02-20 03:19:15.326882 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:15.326906 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:15.326918 | orchestrator | ok: [testbed-node-2] 2026-02-20 
03:19:15.326929 | orchestrator | 2026-02-20 03:19:15.326943 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:15.326960 | orchestrator | Friday 20 February 2026 03:19:14 +0000 (0:00:00.296) 0:00:08.109 ******* 2026-02-20 03:19:15.326972 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.326983 | orchestrator | 2026-02-20 03:19:15.326993 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:15.327004 | orchestrator | Friday 20 February 2026 03:19:15 +0000 (0:00:00.282) 0:00:08.392 ******* 2026-02-20 03:19:15.327015 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:15.327025 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:15.327036 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:15.327054 | orchestrator | 2026-02-20 03:19:15.327067 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:15.327088 | orchestrator | Friday 20 February 2026 03:19:15 +0000 (0:00:00.303) 0:00:08.695 ******* 2026-02-20 03:19:28.493518 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:28.493633 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:28.493646 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:28.493655 | orchestrator | 2026-02-20 03:19:28.493665 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:28.493675 | orchestrator | Friday 20 February 2026 03:19:15 +0000 (0:00:00.313) 0:00:09.009 ******* 2026-02-20 03:19:28.493684 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.493694 | orchestrator | 2026-02-20 03:19:28.493704 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:28.493713 | orchestrator | Friday 20 February 2026 03:19:15 +0000 (0:00:00.122) 0:00:09.131 ******* 2026-02-20 03:19:28.493722 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.493731 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.493739 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.493748 | orchestrator | 2026-02-20 03:19:28.493757 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:28.493765 | orchestrator | Friday 20 February 2026 03:19:16 +0000 (0:00:00.289) 0:00:09.421 ******* 2026-02-20 03:19:28.493774 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:28.493783 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:28.493791 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:28.493800 | orchestrator | 2026-02-20 03:19:28.493809 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:28.493817 | orchestrator | Friday 20 February 2026 03:19:16 +0000 (0:00:00.454) 0:00:09.875 ******* 2026-02-20 03:19:28.493826 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.493835 | orchestrator | 2026-02-20 03:19:28.493844 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:28.493860 | orchestrator | Friday 20 February 2026 03:19:16 +0000 (0:00:00.129) 0:00:10.005 ******* 2026-02-20 03:19:28.493875 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.493889 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.493904 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.493919 | orchestrator | 2026-02-20 03:19:28.493934 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:28.493950 | orchestrator | Friday 20 February 2026 03:19:16 +0000 (0:00:00.279) 0:00:10.284 ******* 2026-02-20 03:19:28.493966 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:28.493981 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:28.493997 | orchestrator | ok: 
[testbed-node-2] 2026-02-20 03:19:28.494008 | orchestrator | 2026-02-20 03:19:28.494067 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:28.494078 | orchestrator | Friday 20 February 2026 03:19:17 +0000 (0:00:00.298) 0:00:10.582 ******* 2026-02-20 03:19:28.494088 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494098 | orchestrator | 2026-02-20 03:19:28.494108 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:28.494132 | orchestrator | Friday 20 February 2026 03:19:17 +0000 (0:00:00.118) 0:00:10.700 ******* 2026-02-20 03:19:28.494142 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494175 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.494186 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.494196 | orchestrator | 2026-02-20 03:19:28.494206 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-20 03:19:28.494216 | orchestrator | Friday 20 February 2026 03:19:17 +0000 (0:00:00.437) 0:00:11.138 ******* 2026-02-20 03:19:28.494226 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:19:28.494237 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:19:28.494246 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:19:28.494254 | orchestrator | 2026-02-20 03:19:28.494263 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-20 03:19:28.494271 | orchestrator | Friday 20 February 2026 03:19:18 +0000 (0:00:00.314) 0:00:11.452 ******* 2026-02-20 03:19:28.494280 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494309 | orchestrator | 2026-02-20 03:19:28.494318 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-20 03:19:28.494327 | orchestrator | Friday 20 February 2026 03:19:18 +0000 (0:00:00.132) 0:00:11.585 ******* 
2026-02-20 03:19:28.494336 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494344 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.494353 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.494362 | orchestrator | 2026-02-20 03:19:28.494371 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-20 03:19:28.494379 | orchestrator | Friday 20 February 2026 03:19:18 +0000 (0:00:00.280) 0:00:11.865 ******* 2026-02-20 03:19:28.494388 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:19:28.494396 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:19:28.494405 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:19:28.494413 | orchestrator | 2026-02-20 03:19:28.494422 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-20 03:19:28.494431 | orchestrator | Friday 20 February 2026 03:19:20 +0000 (0:00:01.729) 0:00:13.595 ******* 2026-02-20 03:19:28.494440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-20 03:19:28.494450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-20 03:19:28.494458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-20 03:19:28.494467 | orchestrator | 2026-02-20 03:19:28.494476 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-20 03:19:28.494484 | orchestrator | Friday 20 February 2026 03:19:22 +0000 (0:00:01.823) 0:00:15.418 ******* 2026-02-20 03:19:28.494493 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-20 03:19:28.494503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-20 03:19:28.494511 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-20 03:19:28.494520 | orchestrator | 2026-02-20 03:19:28.494529 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-20 03:19:28.494553 | orchestrator | Friday 20 February 2026 03:19:23 +0000 (0:00:01.747) 0:00:17.166 ******* 2026-02-20 03:19:28.494563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-20 03:19:28.494572 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-20 03:19:28.494580 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-20 03:19:28.494589 | orchestrator | 2026-02-20 03:19:28.494598 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-20 03:19:28.494606 | orchestrator | Friday 20 February 2026 03:19:25 +0000 (0:00:01.502) 0:00:18.668 ******* 2026-02-20 03:19:28.494615 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494624 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.494632 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.494641 | orchestrator | 2026-02-20 03:19:28.494649 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-20 03:19:28.494658 | orchestrator | Friday 20 February 2026 03:19:25 +0000 (0:00:00.466) 0:00:19.135 ******* 2026-02-20 03:19:28.494667 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:28.494675 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:28.494684 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:28.494693 | orchestrator | 2026-02-20 03:19:28.494701 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-20 03:19:28.494710 
| orchestrator | Friday 20 February 2026 03:19:26 +0000 (0:00:00.282) 0:00:19.417 ******* 2026-02-20 03:19:28.494719 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:19:28.494733 | orchestrator | 2026-02-20 03:19:28.494742 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-20 03:19:28.494751 | orchestrator | Friday 20 February 2026 03:19:26 +0000 (0:00:00.605) 0:00:20.023 ******* 2026-02-20 03:19:28.494771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:28.494794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:29.095467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:19:29.095569 | orchestrator | 2026-02-20 03:19:29.095585 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-20 03:19:29.095598 | orchestrator | Friday 20 February 2026 03:19:28 +0000 (0:00:01.829) 0:00:21.852 ******* 2026-02-20 03:19:29.095638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:29.095673 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:29.095687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:29.095700 | orchestrator | skipping: [testbed-node-1] 
2026-02-20 03:19:29.095727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:31.469749 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:31.469890 | orchestrator | 2026-02-20 03:19:31.469915 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-20 03:19:31.469938 | orchestrator | Friday 20 February 2026 03:19:29 +0000 (0:00:00.608) 0:00:22.461 ******* 2026-02-20 03:19:31.469963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:31.470111 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:19:31.470198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:31.470224 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:19:31.470249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 03:19:31.470347 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:19:31.470363 | orchestrator | 2026-02-20 03:19:31.470377 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-20 03:19:31.470390 | orchestrator | Friday 20 February 2026 03:19:29 +0000 (0:00:00.785) 0:00:23.246 ******* 2026-02-20 03:19:31.470424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:20:17.148983 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:20:17.149299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 03:20:17.149343 | orchestrator | 2026-02-20 03:20:17.149366 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-20 03:20:17.149386 | orchestrator | Friday 20 February 2026 03:19:31 +0000 (0:00:01.588) 0:00:24.834 ******* 2026-02-20 03:20:17.149422 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:20:17.149442 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:20:17.149453 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:20:17.149467 | orchestrator | 2026-02-20 03:20:17.149479 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-20 03:20:17.149492 | orchestrator | Friday 20 February 2026 03:19:31 +0000 (0:00:00.305) 0:00:25.140 ******* 2026-02-20 03:20:17.149504 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:20:17.149529 | orchestrator | 2026-02-20 03:20:17.149541 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-20 03:20:17.149553 | orchestrator | Friday 20 February 2026 03:19:32 +0000 (0:00:00.507) 0:00:25.647 ******* 2026-02-20 03:20:17.149566 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:20:17.149579 | orchestrator | 2026-02-20 03:20:17.149590 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-20 03:20:17.149602 | orchestrator | Friday 20 February 2026 03:19:34 +0000 (0:00:02.157) 0:00:27.805 ******* 2026-02-20 03:20:17.149614 | orchestrator | changed: 
[testbed-node-0] 2026-02-20 03:20:17.149626 | orchestrator | 2026-02-20 03:20:17.149638 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-20 03:20:17.149650 | orchestrator | Friday 20 February 2026 03:19:36 +0000 (0:00:02.510) 0:00:30.315 ******* 2026-02-20 03:20:17.149662 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:20:17.149674 | orchestrator | 2026-02-20 03:20:17.149685 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-20 03:20:17.149697 | orchestrator | Friday 20 February 2026 03:19:52 +0000 (0:00:16.009) 0:00:46.325 ******* 2026-02-20 03:20:17.149710 | orchestrator | 2026-02-20 03:20:17.149722 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-20 03:20:17.149734 | orchestrator | Friday 20 February 2026 03:19:53 +0000 (0:00:00.082) 0:00:46.408 ******* 2026-02-20 03:20:17.149745 | orchestrator | 2026-02-20 03:20:17.149757 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-20 03:20:17.149769 | orchestrator | Friday 20 February 2026 03:19:53 +0000 (0:00:00.067) 0:00:46.475 ******* 2026-02-20 03:20:17.149781 | orchestrator | 2026-02-20 03:20:17.149793 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-20 03:20:17.149805 | orchestrator | Friday 20 February 2026 03:19:53 +0000 (0:00:00.069) 0:00:46.545 ******* 2026-02-20 03:20:17.149817 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:20:17.149828 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:20:17.149839 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:20:17.149850 | orchestrator | 2026-02-20 03:20:17.149860 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:20:17.149872 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-02-20 03:20:17.149884 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-20 03:20:17.149902 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-20 03:20:17.149913 | orchestrator | 2026-02-20 03:20:17.149924 | orchestrator | 2026-02-20 03:20:17.149935 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:20:17.149947 | orchestrator | Friday 20 February 2026 03:20:17 +0000 (0:00:23.956) 0:01:10.501 ******* 2026-02-20 03:20:17.149957 | orchestrator | =============================================================================== 2026-02-20 03:20:17.149968 | orchestrator | horizon : Restart horizon container ------------------------------------ 23.96s 2026-02-20 03:20:17.149994 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.01s 2026-02-20 03:20:17.150006 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.51s 2026-02-20 03:20:17.150094 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s 2026-02-20 03:20:17.150118 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2026-02-20 03:20:17.150137 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.82s 2026-02-20 03:20:17.150148 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.75s 2026-02-20 03:20:17.150168 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.73s 2026-02-20 03:20:17.150178 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.59s 2026-02-20 03:20:17.150268 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.50s 
2026-02-20 03:20:17.150282 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s 2026-02-20 03:20:17.150292 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.79s 2026-02-20 03:20:17.150303 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-02-20 03:20:17.150328 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2026-02-20 03:20:17.504337 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-02-20 03:20:17.504433 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-02-20 03:20:17.504446 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2026-02-20 03:20:17.504456 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2026-02-20 03:20:17.504466 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.47s 2026-02-20 03:20:17.504476 | orchestrator | horizon : Update policy file name --------------------------------------- 0.45s 2026-02-20 03:20:19.773923 | orchestrator | 2026-02-20 03:20:19 | INFO  | Task 6cd333db-2e76-4a4f-9402-a212f2f462ab (skyline) was prepared for execution. 2026-02-20 03:20:19.774094 | orchestrator | 2026-02-20 03:20:19 | INFO  | It takes a moment until task 6cd333db-2e76-4a4f-9402-a212f2f462ab (skyline) has been started and output is visible here. 
2026-02-20 03:20:49.592659 | orchestrator | 2026-02-20 03:20:49.592775 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:20:49.592791 | orchestrator | 2026-02-20 03:20:49.592804 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:20:49.592815 | orchestrator | Friday 20 February 2026 03:20:23 +0000 (0:00:00.245) 0:00:00.245 ******* 2026-02-20 03:20:49.592826 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:20:49.592838 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:20:49.592849 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:20:49.592859 | orchestrator | 2026-02-20 03:20:49.592870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:20:49.592881 | orchestrator | Friday 20 February 2026 03:20:24 +0000 (0:00:00.287) 0:00:00.533 ******* 2026-02-20 03:20:49.592892 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-20 03:20:49.592904 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-20 03:20:49.592914 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-20 03:20:49.592925 | orchestrator | 2026-02-20 03:20:49.592936 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-20 03:20:49.592947 | orchestrator | 2026-02-20 03:20:49.592958 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-20 03:20:49.592969 | orchestrator | Friday 20 February 2026 03:20:24 +0000 (0:00:00.410) 0:00:00.943 ******* 2026-02-20 03:20:49.592980 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:20:49.592991 | orchestrator | 2026-02-20 03:20:49.593002 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-20 03:20:49.593013 | orchestrator | Friday 20 February 2026 03:20:24 +0000 (0:00:00.518) 0:00:01.461 ******* 2026-02-20 03:20:49.593024 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-20 03:20:49.593034 | orchestrator | 2026-02-20 03:20:49.593045 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-20 03:20:49.593056 | orchestrator | Friday 20 February 2026 03:20:28 +0000 (0:00:03.224) 0:00:04.686 ******* 2026-02-20 03:20:49.593067 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-20 03:20:49.593106 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-20 03:20:49.593117 | orchestrator | 2026-02-20 03:20:49.593128 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-20 03:20:49.593153 | orchestrator | Friday 20 February 2026 03:20:34 +0000 (0:00:06.267) 0:00:10.954 ******* 2026-02-20 03:20:49.593164 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:20:49.593176 | orchestrator | 2026-02-20 03:20:49.593187 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-20 03:20:49.593199 | orchestrator | Friday 20 February 2026 03:20:37 +0000 (0:00:03.152) 0:00:14.106 ******* 2026-02-20 03:20:49.593243 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:20:49.593263 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-20 03:20:49.593283 | orchestrator | 2026-02-20 03:20:49.593302 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-20 03:20:49.593321 | orchestrator | Friday 20 February 2026 03:20:41 +0000 (0:00:03.869) 0:00:17.975 ******* 2026-02-20 03:20:49.593334 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-20 03:20:49.593347 | orchestrator | 2026-02-20 03:20:49.593359 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-20 03:20:49.593371 | orchestrator | Friday 20 February 2026 03:20:44 +0000 (0:00:03.139) 0:00:21.115 ******* 2026-02-20 03:20:49.593383 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-20 03:20:49.593394 | orchestrator | 2026-02-20 03:20:49.593406 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-20 03:20:49.593418 | orchestrator | Friday 20 February 2026 03:20:48 +0000 (0:00:03.695) 0:00:24.811 ******* 2026-02-20 03:20:49.593434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:49.593473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:49.593495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:49.593540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:49.593557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:49.593577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245445 | orchestrator | 2026-02-20 03:20:53.245569 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-20 03:20:53.245595 | orchestrator | Friday 20 February 2026 03:20:49 +0000 (0:00:01.291) 0:00:26.103 ******* 2026-02-20 03:20:53.245615 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:20:53.245635 | orchestrator | 2026-02-20 03:20:53.245657 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-20 03:20:53.245746 | orchestrator | Friday 20 February 2026 03:20:50 +0000 (0:00:00.659) 0:00:26.763 ******* 2026-02-20 03:20:53.245774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:20:53.245983 | orchestrator | 2026-02-20 03:20:53.246006 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-20 03:20:53.246102 | orchestrator | Friday 20 February 2026 03:20:52 +0000 (0:00:02.393) 0:00:29.156 ******* 2026-02-20 03:20:53.246125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:53.246147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:20:53.246169 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:20:53.246326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456581 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:20:54.456600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456631 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:20:54.456646 | orchestrator | 2026-02-20 03:20:54.456662 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-20 03:20:54.456716 | orchestrator | Friday 20 February 2026 03:20:53 +0000 (0:00:00.608) 0:00:29.765 ******* 2026-02-20 03:20:54.456733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456776 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:20:54.456784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456807 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:20:54.456815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-20 03:20:54.456831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-20 03:21:02.741330 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:21:02.741446 | orchestrator | 2026-02-20 03:21:02.741481 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-20 03:21:02.741495 | orchestrator | Friday 20 February 2026 03:20:54 +0000 (0:00:01.203) 0:00:30.968 ******* 2026-02-20 03:21:02.741509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741643 | orchestrator | 2026-02-20 03:21:02.741655 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-20 03:21:02.741666 | orchestrator | Friday 20 February 2026 03:20:56 +0000 (0:00:02.474) 0:00:33.443 ******* 2026-02-20 03:21:02.741677 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-20 03:21:02.741688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-20 03:21:02.741699 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-20 03:21:02.741710 | orchestrator | 2026-02-20 03:21:02.741721 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-20 03:21:02.741731 | orchestrator | Friday 20 February 2026 03:20:58 +0000 (0:00:01.516) 0:00:34.959 ******* 2026-02-20 03:21:02.741742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-20 03:21:02.741753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-20 03:21:02.741764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-20 03:21:02.741775 | orchestrator | 2026-02-20 03:21:02.741787 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-20 03:21:02.741804 | orchestrator | Friday 20 February 2026 03:21:00 +0000 (0:00:01.961) 0:00:36.920 ******* 2026-02-20 03:21:02.741824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:02.741874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.845852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.845979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.845997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846096 | orchestrator | 2026-02-20 03:21:04.846109 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-20 03:21:04.846122 | orchestrator | Friday 20 February 2026 03:21:02 +0000 (0:00:02.341) 0:00:39.261 ******* 2026-02-20 03:21:04.846133 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:21:04.846145 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 03:21:04.846156 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:21:04.846167 | orchestrator | 2026-02-20 03:21:04.846266 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-20 03:21:04.846279 | orchestrator | Friday 20 February 2026 03:21:03 +0000 (0:00:00.302) 0:00:39.564 ******* 2026-02-20 03:21:04.846291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:04.846368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:38.853245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-20 03:21:38.853368 | orchestrator | 2026-02-20 03:21:38.853386 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-20 03:21:38.853399 | orchestrator | Friday 20 February 2026 03:21:04 +0000 (0:00:01.800) 0:00:41.364 ******* 2026-02-20 03:21:38.853411 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:21:38.853423 | orchestrator | 2026-02-20 03:21:38.853435 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-20 03:21:38.853446 | orchestrator | Friday 20 February 2026 03:21:06 +0000 (0:00:01.956) 0:00:43.321 ******* 2026-02-20 03:21:38.853458 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:21:38.853469 | orchestrator | 2026-02-20 03:21:38.853480 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-20 03:21:38.853491 | orchestrator | Friday 20 February 2026 03:21:08 +0000 (0:00:02.141) 0:00:45.462 ******* 2026-02-20 03:21:38.853502 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:21:38.853513 | orchestrator | 2026-02-20 03:21:38.853523 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-20 03:21:38.853534 | orchestrator | Friday 20 February 2026 03:21:16 +0000 (0:00:07.641) 0:00:53.104 ******* 2026-02-20 03:21:38.853545 | orchestrator | 2026-02-20 03:21:38.853556 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-20 03:21:38.853567 | orchestrator | Friday 20 February 2026 03:21:16 +0000 (0:00:00.064) 0:00:53.169 ******* 2026-02-20 03:21:38.853577 | orchestrator | 2026-02-20 03:21:38.853588 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-20 03:21:38.853599 | orchestrator | Friday 20 February 2026 03:21:16 +0000 (0:00:00.065) 0:00:53.234 ******* 2026-02-20 03:21:38.853609 | orchestrator | 2026-02-20 03:21:38.853620 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-20 03:21:38.853631 | orchestrator | Friday 20 February 2026 03:21:16 +0000 (0:00:00.067) 0:00:53.301 ******* 2026-02-20 03:21:38.853642 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:21:38.853652 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:21:38.853663 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:21:38.853674 | orchestrator | 2026-02-20 03:21:38.853686 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-20 03:21:38.853699 | orchestrator | Friday 20 February 2026 03:21:24 +0000 (0:00:07.981) 0:01:01.283 ******* 2026-02-20 03:21:38.853711 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:21:38.853724 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:21:38.853736 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:21:38.853774 | orchestrator | 2026-02-20 03:21:38.853787 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:21:38.853800 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 03:21:38.853829 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 03:21:38.853841 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 03:21:38.853854 | orchestrator | 2026-02-20 03:21:38.853866 | orchestrator | 2026-02-20 03:21:38.853878 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:21:38.853889 | orchestrator | Friday 20 
February 2026 03:21:38 +0000 (0:00:13.815) 0:01:15.098 ******* 2026-02-20 03:21:38.853900 | orchestrator | =============================================================================== 2026-02-20 03:21:38.853911 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.82s 2026-02-20 03:21:38.853921 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 7.98s 2026-02-20 03:21:38.853932 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.64s 2026-02-20 03:21:38.853943 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.27s 2026-02-20 03:21:38.853953 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.87s 2026-02-20 03:21:38.853964 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.70s 2026-02-20 03:21:38.853974 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.22s 2026-02-20 03:21:38.853985 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.15s 2026-02-20 03:21:38.854014 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.14s 2026-02-20 03:21:38.854085 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.47s 2026-02-20 03:21:38.854096 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.39s 2026-02-20 03:21:38.854126 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.34s 2026-02-20 03:21:38.854137 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.14s 2026-02-20 03:21:38.854148 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 1.96s 2026-02-20 03:21:38.854159 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 1.96s 2026-02-20 03:21:38.854169 | orchestrator | skyline : Check skyline container --------------------------------------- 1.80s 2026-02-20 03:21:38.854180 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.52s 2026-02-20 03:21:38.854191 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.29s 2026-02-20 03:21:38.854202 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.20s 2026-02-20 03:21:38.854212 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.66s 2026-02-20 03:21:41.049949 | orchestrator | 2026-02-20 03:21:41 | INFO  | Task 85f8f3d8-ba03-497a-bbc0-b22122af5d32 (glance) was prepared for execution. 2026-02-20 03:21:41.050079 | orchestrator | 2026-02-20 03:21:41 | INFO  | It takes a moment until task 85f8f3d8-ba03-497a-bbc0-b22122af5d32 (glance) has been started and output is visible here. 
2026-02-20 03:22:13.109062 | orchestrator | 2026-02-20 03:22:13.109178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:22:13.109194 | orchestrator | 2026-02-20 03:22:13.109206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:22:13.109218 | orchestrator | Friday 20 February 2026 03:21:45 +0000 (0:00:00.251) 0:00:00.251 ******* 2026-02-20 03:22:13.109229 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:22:13.109264 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:22:13.109276 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:22:13.109287 | orchestrator | 2026-02-20 03:22:13.109298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:22:13.109309 | orchestrator | Friday 20 February 2026 03:21:45 +0000 (0:00:00.263) 0:00:00.514 ******* 2026-02-20 03:22:13.109320 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-20 03:22:13.109332 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-20 03:22:13.109342 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-20 03:22:13.109353 | orchestrator | 2026-02-20 03:22:13.109364 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-20 03:22:13.109375 | orchestrator | 2026-02-20 03:22:13.109386 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-20 03:22:13.109396 | orchestrator | Friday 20 February 2026 03:21:45 +0000 (0:00:00.311) 0:00:00.826 ******* 2026-02-20 03:22:13.109407 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:22:13.109419 | orchestrator | 2026-02-20 03:22:13.109430 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-20 
03:22:13.109441 | orchestrator | Friday 20 February 2026 03:21:46 +0000 (0:00:00.403) 0:00:01.229 ******* 2026-02-20 03:22:13.109451 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-20 03:22:13.109463 | orchestrator | 2026-02-20 03:22:13.109481 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-20 03:22:13.109500 | orchestrator | Friday 20 February 2026 03:21:49 +0000 (0:00:03.262) 0:00:04.492 ******* 2026-02-20 03:22:13.109519 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-20 03:22:13.109538 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-20 03:22:13.109555 | orchestrator | 2026-02-20 03:22:13.109592 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-20 03:22:13.109612 | orchestrator | Friday 20 February 2026 03:21:55 +0000 (0:00:06.152) 0:00:10.645 ******* 2026-02-20 03:22:13.109630 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:22:13.109651 | orchestrator | 2026-02-20 03:22:13.109670 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-20 03:22:13.109688 | orchestrator | Friday 20 February 2026 03:21:58 +0000 (0:00:03.151) 0:00:13.796 ******* 2026-02-20 03:22:13.109707 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:22:13.109726 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-20 03:22:13.109744 | orchestrator | 2026-02-20 03:22:13.109763 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-20 03:22:13.109781 | orchestrator | Friday 20 February 2026 03:22:02 +0000 (0:00:03.930) 0:00:17.727 ******* 2026-02-20 03:22:13.109800 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 
03:22:13.109818 | orchestrator | 2026-02-20 03:22:13.109836 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-20 03:22:13.109847 | orchestrator | Friday 20 February 2026 03:22:05 +0000 (0:00:03.086) 0:00:20.813 ******* 2026-02-20 03:22:13.109857 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-20 03:22:13.109868 | orchestrator | 2026-02-20 03:22:13.109878 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-20 03:22:13.109889 | orchestrator | Friday 20 February 2026 03:22:09 +0000 (0:00:03.506) 0:00:24.319 ******* 2026-02-20 03:22:13.109927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:13.109962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:13.109976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:13.109994 | orchestrator | 2026-02-20 03:22:13.110005 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-20 03:22:13.110105 | orchestrator | Friday 20 February 2026 03:22:12 +0000 (0:00:03.206) 0:00:27.526 ******* 2026-02-20 03:22:13.110122 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:22:13.110138 | orchestrator | 2026-02-20 03:22:13.110214 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-20 03:22:27.358313 | orchestrator | Friday 20 February 2026 03:22:13 +0000 (0:00:00.678) 0:00:28.204 ******* 2026-02-20 03:22:27.358424 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:22:27.358439 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:22:27.358450 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:22:27.358460 | orchestrator | 2026-02-20 03:22:27.358471 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-20 03:22:27.358481 | orchestrator | Friday 20 February 2026 03:22:16 +0000 (0:00:03.487) 0:00:31.692 ******* 2026-02-20 03:22:27.358503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:22:27.358515 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:22:27.358525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:22:27.358535 | orchestrator | 2026-02-20 03:22:27.358545 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-20 03:22:27.358555 | orchestrator | Friday 20 February 2026 03:22:18 +0000 (0:00:01.448) 0:00:33.140 ******* 2026-02-20 03:22:27.358565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 
03:22:27.358575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:22:27.358584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:22:27.358594 | orchestrator | 2026-02-20 03:22:27.358604 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-20 03:22:27.358614 | orchestrator | Friday 20 February 2026 03:22:19 +0000 (0:00:01.294) 0:00:34.435 ******* 2026-02-20 03:22:27.358624 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:22:27.358635 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:22:27.358645 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:22:27.358655 | orchestrator | 2026-02-20 03:22:27.358665 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-20 03:22:27.358674 | orchestrator | Friday 20 February 2026 03:22:19 +0000 (0:00:00.630) 0:00:35.065 ******* 2026-02-20 03:22:27.358699 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:27.358709 | orchestrator | 2026-02-20 03:22:27.358719 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-20 03:22:27.358729 | orchestrator | Friday 20 February 2026 03:22:20 +0000 (0:00:00.104) 0:00:35.170 ******* 2026-02-20 03:22:27.358739 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:27.358748 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:27.358758 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:27.358791 | orchestrator | 2026-02-20 03:22:27.358801 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-20 03:22:27.358811 | orchestrator | Friday 20 February 2026 03:22:20 +0000 (0:00:00.246) 0:00:35.416 ******* 2026-02-20 03:22:27.358821 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:22:27.358831 | orchestrator | 2026-02-20 03:22:27.358840 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-20 03:22:27.358851 | orchestrator | Friday 20 February 2026 03:22:20 +0000 (0:00:00.586) 0:00:36.002 ******* 2026-02-20 03:22:27.358869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:27.358908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:27.358932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:27.358945 | orchestrator | 2026-02-20 03:22:27.358956 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-20 03:22:27.358968 | orchestrator | Friday 20 February 2026 03:22:24 +0000 (0:00:03.662) 0:00:39.665 ******* 2026-02-20 03:22:27.358990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:30.470767 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:30.470923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:30.470967 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:30.470981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:30.471082 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:30.471101 | orchestrator | 2026-02-20 03:22:30.471114 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-20 03:22:30.471128 | orchestrator | Friday 20 February 2026 03:22:27 +0000 (0:00:02.789) 0:00:42.454 ******* 2026-02-20 03:22:30.471168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:30.471195 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:30.471216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:30.471236 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:30.471275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 03:22:59.754646 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.754812 | orchestrator | 2026-02-20 03:22:59.754833 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-20 03:22:59.754847 | orchestrator | Friday 20 February 2026 03:22:30 +0000 (0:00:03.112) 0:00:45.566 ******* 2026-02-20 03:22:59.754859 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.754870 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.754881 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.754892 | orchestrator | 2026-02-20 03:22:59.754903 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-20 03:22:59.754914 | orchestrator | Friday 20 February 2026 03:22:33 +0000 (0:00:02.760) 0:00:48.327 ******* 2026-02-20 03:22:59.755184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:59.755240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:59.755399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:22:59.755430 | orchestrator | 2026-02-20 03:22:59.755449 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-20 03:22:59.755469 | orchestrator | Friday 20 February 2026 03:22:36 +0000 (0:00:03.342) 0:00:51.669 ******* 2026-02-20 03:22:59.755489 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:22:59.755506 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:22:59.755523 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:22:59.755540 | orchestrator | 2026-02-20 03:22:59.755558 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-20 03:22:59.755577 | orchestrator | Friday 20 February 2026 03:22:41 +0000 (0:00:04.896) 0:00:56.566 ******* 2026-02-20 03:22:59.755596 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.755614 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.755634 | 
orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.755653 | orchestrator | 2026-02-20 03:22:59.755671 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-20 03:22:59.755690 | orchestrator | Friday 20 February 2026 03:22:44 +0000 (0:00:03.116) 0:00:59.683 ******* 2026-02-20 03:22:59.755708 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.755725 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.755744 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.755763 | orchestrator | 2026-02-20 03:22:59.755782 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-20 03:22:59.755801 | orchestrator | Friday 20 February 2026 03:22:47 +0000 (0:00:02.987) 0:01:02.670 ******* 2026-02-20 03:22:59.755835 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.755854 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.755872 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.755890 | orchestrator | 2026-02-20 03:22:59.755909 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-20 03:22:59.755969 | orchestrator | Friday 20 February 2026 03:22:50 +0000 (0:00:02.720) 0:01:05.391 ******* 2026-02-20 03:22:59.755988 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.756007 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.756026 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.756046 | orchestrator | 2026-02-20 03:22:59.756064 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-20 03:22:59.756083 | orchestrator | Friday 20 February 2026 03:22:53 +0000 (0:00:02.783) 0:01:08.174 ******* 2026-02-20 03:22:59.756100 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.756119 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.756138 | 
orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.756157 | orchestrator | 2026-02-20 03:22:59.756175 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-20 03:22:59.756193 | orchestrator | Friday 20 February 2026 03:22:53 +0000 (0:00:00.335) 0:01:08.509 ******* 2026-02-20 03:22:59.756212 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-20 03:22:59.756231 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:22:59.756250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-20 03:22:59.756270 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:22:59.756288 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-20 03:22:59.756306 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:22:59.756325 | orchestrator | 2026-02-20 03:22:59.756410 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-20 03:22:59.756431 | orchestrator | Friday 20 February 2026 03:22:55 +0000 (0:00:02.540) 0:01:11.049 ******* 2026-02-20 03:22:59.756449 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:22:59.756476 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:22:59.756496 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:22:59.756515 | orchestrator | 2026-02-20 03:22:59.756534 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-20 03:22:59.756566 | orchestrator | Friday 20 February 2026 03:22:59 +0000 (0:00:03.795) 0:01:14.845 ******* 2026-02-20 03:24:09.140474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:24:09.140673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:24:09.140740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 03:24:09.140757 | orchestrator | 2026-02-20 03:24:09.140773 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-20 03:24:09.140788 | orchestrator | Friday 20 February 2026 03:23:02 +0000 (0:00:03.229) 0:01:18.075 ******* 2026-02-20 03:24:09.140825 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:24:09.140852 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:24:09.140864 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:24:09.140875 | orchestrator | 2026-02-20 03:24:09.140887 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-20 03:24:09.140899 | orchestrator | Friday 20 February 2026 03:23:03 +0000 (0:00:00.388) 0:01:18.464 ******* 2026-02-20 03:24:09.140910 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.140922 | orchestrator | 2026-02-20 03:24:09.140934 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-20 03:24:09.140948 | orchestrator | Friday 20 February 2026 03:23:05 +0000 (0:00:02.084) 0:01:20.549 ******* 2026-02-20 03:24:09.140962 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.140976 | orchestrator | 2026-02-20 03:24:09.140988 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-20 03:24:09.141000 | orchestrator | Friday 20 February 2026 03:23:07 +0000 (0:00:02.099) 0:01:22.649 ******* 2026-02-20 03:24:09.141013 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.141027 | orchestrator | 2026-02-20 03:24:09.141041 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-20 03:24:09.141056 | orchestrator | Friday 20 February 2026 03:23:09 +0000 (0:00:01.971) 0:01:24.620 ******* 2026-02-20 03:24:09.141073 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.141089 | orchestrator | 2026-02-20 03:24:09.141105 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-20 03:24:09.141123 | orchestrator | Friday 20 February 2026 03:23:36 +0000 (0:00:26.992) 0:01:51.612 ******* 2026-02-20 03:24:09.141137 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.141150 | orchestrator | 2026-02-20 03:24:09.141165 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-20 03:24:09.141177 | orchestrator | Friday 20 February 2026 03:23:38 +0000 (0:00:01.988) 0:01:53.600 ******* 2026-02-20 03:24:09.141192 | orchestrator | 2026-02-20 03:24:09.141207 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-20 03:24:09.141225 | orchestrator | Friday 20 February 2026 03:23:38 +0000 (0:00:00.067) 0:01:53.668 ******* 2026-02-20 03:24:09.141241 | orchestrator | 2026-02-20 03:24:09.141256 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-20 03:24:09.141273 | orchestrator | Friday 20 February 2026 03:23:38 +0000 (0:00:00.082) 0:01:53.751 ******* 2026-02-20 03:24:09.141288 | orchestrator | 2026-02-20 03:24:09.141304 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-20 03:24:09.141317 | orchestrator | Friday 20 February 2026 03:23:38 +0000 (0:00:00.067) 0:01:53.818 ******* 2026-02-20 03:24:09.141330 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:24:09.141343 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:24:09.141356 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:24:09.141369 | orchestrator | 2026-02-20 03:24:09.141382 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:24:09.141394 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-20 03:24:09.141408 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-20 03:24:09.141420 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-20 03:24:09.141431 | orchestrator | 2026-02-20 03:24:09.141443 | orchestrator | 2026-02-20 03:24:09.141455 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:24:09.141466 | orchestrator | Friday 20 February 2026 03:24:09 +0000 (0:00:30.407) 0:02:24.226 ******* 2026-02-20 03:24:09.141478 | orchestrator | =============================================================================== 2026-02-20 03:24:09.141498 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.41s 2026-02-20 03:24:09.141552 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.99s 2026-02-20 03:24:09.141567 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.15s 2026-02-20 03:24:09.141593 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.90s 2026-02-20 03:24:09.419912 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.93s 2026-02-20 03:24:09.420013 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.80s 2026-02-20 03:24:09.420027 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.66s 2026-02-20 03:24:09.420038 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.51s 2026-02-20 03:24:09.420049 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.49s 2026-02-20 03:24:09.420060 | orchestrator | glance : Copying over config.json files for services -------------------- 3.34s 2026-02-20 03:24:09.420071 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.26s 2026-02-20 03:24:09.420081 | orchestrator | glance : Check glance containers ---------------------------------------- 3.23s 2026-02-20 03:24:09.420092 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.21s 2026-02-20 03:24:09.420103 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.15s 2026-02-20 03:24:09.420114 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.12s 2026-02-20 03:24:09.420124 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.11s 2026-02-20 03:24:09.420135 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.09s 2026-02-20 03:24:09.420146 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 2.99s 2026-02-20 03:24:09.420156 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.79s 2026-02-20 03:24:09.420168 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 2.78s 2026-02-20 03:24:11.636280 | orchestrator | 2026-02-20 03:24:11 | INFO  | Task 381cc83b-e463-4f85-be0f-bc2f7cb54ef6 (cinder) was prepared for execution. 2026-02-20 03:24:11.636397 | orchestrator | 2026-02-20 03:24:11 | INFO  | It takes a moment until task 381cc83b-e463-4f85-be0f-bc2f7cb54ef6 (cinder) has been started and output is visible here. 2026-02-20 03:24:45.023944 | orchestrator | 2026-02-20 03:24:45.024083 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:24:45.024102 | orchestrator | 2026-02-20 03:24:45.024115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:24:45.024127 | orchestrator | Friday 20 February 2026 03:24:15 +0000 (0:00:00.188) 0:00:00.188 ******* 2026-02-20 03:24:45.024138 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:24:45.024150 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:24:45.024161 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:24:45.024175 | orchestrator | 2026-02-20 03:24:45.024193 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:24:45.024210 | orchestrator | Friday 20 February 2026 03:24:15 +0000 (0:00:00.226) 0:00:00.415 ******* 2026-02-20 03:24:45.024239 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-20 03:24:45.024261 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-20 03:24:45.024278 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-20 03:24:45.024294 | orchestrator | 2026-02-20 03:24:45.024311 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-20 03:24:45.024328 | orchestrator | 2026-02-20 
03:24:45.024345 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-20 03:24:45.024363 | orchestrator | Friday 20 February 2026 03:24:15 +0000 (0:00:00.311) 0:00:00.726 ******* 2026-02-20 03:24:45.024381 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:24:45.024433 | orchestrator | 2026-02-20 03:24:45.024454 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-20 03:24:45.024474 | orchestrator | Friday 20 February 2026 03:24:16 +0000 (0:00:00.442) 0:00:01.168 ******* 2026-02-20 03:24:45.024493 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-20 03:24:45.024510 | orchestrator | 2026-02-20 03:24:45.024530 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-20 03:24:45.024547 | orchestrator | Friday 20 February 2026 03:24:19 +0000 (0:00:03.299) 0:00:04.468 ******* 2026-02-20 03:24:45.024566 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-20 03:24:45.024585 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-20 03:24:45.024602 | orchestrator | 2026-02-20 03:24:45.024619 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-20 03:24:45.024635 | orchestrator | Friday 20 February 2026 03:24:25 +0000 (0:00:06.292) 0:00:10.761 ******* 2026-02-20 03:24:45.024651 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:24:45.024668 | orchestrator | 2026-02-20 03:24:45.024686 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-20 03:24:45.024704 | orchestrator | Friday 20 February 2026 03:24:29 +0000 (0:00:03.082) 
0:00:13.843 ******* 2026-02-20 03:24:45.024723 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:24:45.024834 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-20 03:24:45.024857 | orchestrator | 2026-02-20 03:24:45.024893 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-20 03:24:45.024905 | orchestrator | Friday 20 February 2026 03:24:33 +0000 (0:00:04.108) 0:00:17.952 ******* 2026-02-20 03:24:45.024916 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:24:45.024928 | orchestrator | 2026-02-20 03:24:45.024938 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-20 03:24:45.024949 | orchestrator | Friday 20 February 2026 03:24:36 +0000 (0:00:03.001) 0:00:20.954 ******* 2026-02-20 03:24:45.024960 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-20 03:24:45.024971 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-20 03:24:45.024982 | orchestrator | 2026-02-20 03:24:45.024992 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-20 03:24:45.025003 | orchestrator | Friday 20 February 2026 03:24:43 +0000 (0:00:06.931) 0:00:27.886 ******* 2026-02-20 03:24:45.025018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:24:45.025058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:24:45.025088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:24:45.025107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:45.025149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:45.025169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:45.025188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:45.025234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:50.743046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:50.743176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:50.743212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:50.743231 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:24:50.743248 | orchestrator | 2026-02-20 03:24:50.743266 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-20 03:24:50.743284 | orchestrator | Friday 20 February 2026 03:24:45 +0000 (0:00:02.002) 0:00:29.888 ******* 2026-02-20 03:24:50.743300 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:24:50.743317 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:24:50.743360 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:24:50.743377 | orchestrator | 2026-02-20 03:24:50.743393 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-20 03:24:50.743411 | orchestrator | Friday 20 February 2026 03:24:45 +0000 (0:00:00.456) 0:00:30.345 ******* 2026-02-20 03:24:50.743428 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:24:50.743446 | orchestrator | 2026-02-20 03:24:50.743461 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-20 03:24:50.743480 | orchestrator | Friday 20 February 2026 03:24:46 +0000 (0:00:00.510) 0:00:30.855 ******* 2026-02-20 03:24:50.743497 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-20 03:24:50.743515 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-20 03:24:50.743528 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-20 03:24:50.743541 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-20 03:24:50.743559 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-20 03:24:50.743576 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-20 03:24:50.743595 | orchestrator | 2026-02-20 03:24:50.743612 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-20 03:24:50.743631 | orchestrator | Friday 20 February 2026 03:24:47 +0000 (0:00:01.740) 0:00:32.595 ******* 2026-02-20 03:24:50.743676 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:24:50.743712 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:24:50.743758 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:24:50.743833 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:24:50.743866 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:25:01.467987 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-20 03:25:01.468119 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 03:25:01.468961 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 03:25:01.469002 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 03:25:01.469012 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 03:25:01.469038 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 
03:25:01.469047 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-20 03:25:01.469056 | orchestrator | 2026-02-20 03:25:01.469066 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-20 03:25:01.469076 | orchestrator | Friday 20 February 2026 03:24:51 +0000 (0:00:03.219) 0:00:35.815 ******* 2026-02-20 03:25:01.469090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:25:01.469100 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:25:01.469108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-20 03:25:01.469117 | orchestrator | 2026-02-20 03:25:01.469125 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-20 03:25:01.469133 | orchestrator | Friday 20 February 2026 03:24:52 +0000 (0:00:01.577) 0:00:37.392 ******* 2026-02-20 03:25:01.469148 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-20 03:25:01.469157 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-20 03:25:01.469165 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-20 03:25:01.469173 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 03:25:01.469181 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 03:25:01.469189 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-20 03:25:01.469197 | orchestrator | 2026-02-20 03:25:01.469205 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-20 03:25:01.469213 | orchestrator | Friday 20 February 2026 03:24:55 +0000 (0:00:02.683) 0:00:40.075 ******* 2026-02-20 03:25:01.469222 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-20 03:25:01.469231 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-20 03:25:01.469239 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-20 03:25:01.469247 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-20 03:25:01.469254 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-20 03:25:01.469262 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-20 03:25:01.469270 | orchestrator | 2026-02-20 03:25:01.469278 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-20 03:25:01.469286 | orchestrator | Friday 20 February 2026 03:24:56 +0000 (0:00:01.010) 0:00:41.086 ******* 2026-02-20 03:25:01.469295 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:01.469303 | orchestrator | 2026-02-20 03:25:01.469311 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-20 03:25:01.469319 | orchestrator | Friday 20 February 2026 03:24:56 +0000 (0:00:00.130) 0:00:41.217 ******* 2026-02-20 03:25:01.469327 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:01.469335 | orchestrator | 
skipping: [testbed-node-1] 2026-02-20 03:25:01.469343 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:25:01.469350 | orchestrator | 2026-02-20 03:25:01.469363 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-20 03:25:01.469377 | orchestrator | Friday 20 February 2026 03:24:56 +0000 (0:00:00.455) 0:00:41.672 ******* 2026-02-20 03:25:01.469393 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:25:01.469407 | orchestrator | 2026-02-20 03:25:01.469420 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-20 03:25:01.469434 | orchestrator | Friday 20 February 2026 03:24:57 +0000 (0:00:00.528) 0:00:42.200 ******* 2026-02-20 03:25:01.469459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:02.319177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:02.319302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:02.319318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 
03:25:02.319425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:02.319460 | orchestrator | 2026-02-20 03:25:02.319482 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-20 03:25:02.319502 | orchestrator | Friday 20 February 2026 03:25:01 +0000 (0:00:04.134) 0:00:46.335 ******* 2026-02-20 03:25:02.319533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:02.420555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420682 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:02.420696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:02.420752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420893 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:25:02.420905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:02.420917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.420959 | orchestrator | skipping: 
[testbed-node-2]
2026-02-20 03:25:02.420970 | orchestrator |
2026-02-20 03:25:02.420982 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-20 03:25:02.421007 | orchestrator | Friday 20 February 2026 03:25:02 +0000 (0:00:00.864) 0:00:47.199 *******
2026-02-20 03:25:02.954497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-20 03:25:02.954589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:25:02.954604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value':
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.954614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.954652 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:02.954664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:02.954779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.954804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.954820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:02.954835 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:25:02.954852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:02.954882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:25:02.954919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-20 03:25:07.523171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:07.523289 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:25:07.523307 | orchestrator |
2026-02-20 03:25:07.523320 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-02-20 03:25:07.523333 | orchestrator | Friday 20 February 2026 03:25:03 +0000 (0:00:00.837) 0:00:48.037 ******* 2026-02-20 03:25:07.523347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:07.523360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 
03:25:07.523395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:07.523440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:07.523579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673281 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:19.673403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:19.673421 | orchestrator |
2026-02-20 03:25:19.673436 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-02-20 03:25:19.673449 | orchestrator | Friday 20 February 2026 03:25:07 +0000 (0:00:04.351) 0:00:52.388 *******
2026-02-20 03:25:19.673487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-20 03:25:19.673500 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-20 03:25:19.673510 |
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-20 03:25:19.673521 | orchestrator |
2026-02-20 03:25:19.673532 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-02-20 03:25:19.673543 | orchestrator | Friday 20 February 2026 03:25:09 +0000 (0:00:01.864) 0:00:54.253 *******
2026-02-20 03:25:19.673555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-20 03:25:19.673569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:19.673613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:19.673627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:19.673789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:22.067342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:22.067481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:22.067503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-20 03:25:22.067519 | orchestrator |
2026-02-20 03:25:22.067535 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-02-20 03:25:22.067549 | orchestrator | Friday 20 February 2026 03:25:19 +0000 (0:00:10.284) 0:01:04.538 *******
2026-02-20 03:25:22.067563 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:25:22.067577 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:25:22.067590 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:25:22.067604 | orchestrator |
2026-02-20 03:25:22.067617 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-02-20 03:25:22.067631 | orchestrator | Friday 20 February 2026 03:25:21 +0000 (0:00:01.570) 0:01:06.109 *******
2026-02-20 03:25:22.067663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-20 03:25:22.067752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout':
'30'}}})  2026-02-20 03:25:22.067792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:22.067822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:22.067837 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:22.067851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:22.067866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:22.067888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:22.067913 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:25.618955 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:25:25.619071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-20 03:25:25.619090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:25:25.619103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 03:25:25.619115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 03:25:25.619143 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:25:25.619156 | orchestrator | 2026-02-20 
03:25:25.619168 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-20 03:25:25.619180 | orchestrator | Friday 20 February 2026 03:25:22 +0000 (0:00:00.826) 0:01:06.936 ******* 2026-02-20 03:25:25.619191 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:25:25.619202 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:25:25.619213 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:25:25.619243 | orchestrator | 2026-02-20 03:25:25.619255 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-20 03:25:25.619266 | orchestrator | Friday 20 February 2026 03:25:22 +0000 (0:00:00.502) 0:01:07.438 ******* 2026-02-20 03:25:25.619294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:25.619308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:25.619320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-20 03:25:25.619332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:25.619349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:25.619369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:25:25.619389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:08.288012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:08.288131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:08.288148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:08.288175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:08.288209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-20 03:27:08.288222 | orchestrator | 2026-02-20 03:27:08.288235 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-20 03:27:08.288248 | orchestrator | Friday 20 February 2026 03:25:25 +0000 (0:00:03.043) 0:01:10.482 ******* 2026-02-20 03:27:08.288259 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:27:08.288272 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:27:08.288283 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:27:08.288294 | orchestrator | 2026-02-20 03:27:08.288305 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-20 03:27:08.288315 | orchestrator | Friday 20 February 2026 03:25:25 +0000 (0:00:00.274) 0:01:10.756 ******* 2026-02-20 03:27:08.288326 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288337 | orchestrator | 2026-02-20 03:27:08.288365 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-20 03:27:08.288376 | orchestrator | Friday 20 February 2026 03:25:27 +0000 (0:00:02.028) 0:01:12.784 ******* 2026-02-20 03:27:08.288387 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288398 | orchestrator | 2026-02-20 03:27:08.288408 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-20 03:27:08.288419 | orchestrator | Friday 20 February 2026 03:25:30 +0000 (0:00:02.090) 0:01:14.875 ******* 2026-02-20 03:27:08.288430 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288440 | orchestrator | 2026-02-20 03:27:08.288451 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-20 03:27:08.288462 | orchestrator | Friday 20 February 2026 03:25:49 +0000 (0:00:19.103) 0:01:33.978 ******* 2026-02-20 03:27:08.288473 | orchestrator | 2026-02-20 03:27:08.288483 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-20 03:27:08.288494 | orchestrator | Friday 20 February 2026 03:25:49 +0000 (0:00:00.068) 0:01:34.047 ******* 2026-02-20 03:27:08.288505 | orchestrator | 2026-02-20 03:27:08.288516 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-20 03:27:08.288526 | orchestrator | Friday 20 February 2026 03:25:49 +0000 (0:00:00.068) 0:01:34.116 ******* 2026-02-20 03:27:08.288537 | orchestrator | 2026-02-20 03:27:08.288580 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-20 03:27:08.288600 | orchestrator | Friday 20 February 2026 03:25:49 +0000 (0:00:00.071) 0:01:34.187 ******* 2026-02-20 03:27:08.288619 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288638 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:27:08.288659 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:27:08.288677 | orchestrator | 2026-02-20 03:27:08.288696 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-20 03:27:08.288709 | orchestrator | Friday 20 February 2026 03:26:17 +0000 (0:00:28.416) 0:02:02.604 ******* 2026-02-20 03:27:08.288730 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288743 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:27:08.288755 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:27:08.288768 | orchestrator | 2026-02-20 03:27:08.288781 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-20 03:27:08.288793 | orchestrator | Friday 20 February 2026 03:26:28 +0000 (0:00:10.265) 0:02:12.870 ******* 2026-02-20 03:27:08.288805 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288818 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:27:08.288830 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:27:08.288842 | orchestrator | 2026-02-20 
03:27:08.288855 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-20 03:27:08.288868 | orchestrator | Friday 20 February 2026 03:26:56 +0000 (0:00:28.626) 0:02:41.496 ******* 2026-02-20 03:27:08.288880 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:27:08.288893 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:27:08.288910 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:27:08.288928 | orchestrator | 2026-02-20 03:27:08.288939 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-20 03:27:08.288951 | orchestrator | Friday 20 February 2026 03:27:08 +0000 (0:00:11.294) 0:02:52.790 ******* 2026-02-20 03:27:08.288962 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:27:08.288973 | orchestrator | 2026-02-20 03:27:08.288984 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:27:08.288995 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-20 03:27:08.289014 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:27:08.289025 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:27:08.289036 | orchestrator | 2026-02-20 03:27:08.289046 | orchestrator | 2026-02-20 03:27:08.289057 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:27:08.289068 | orchestrator | Friday 20 February 2026 03:27:08 +0000 (0:00:00.259) 0:02:53.050 ******* 2026-02-20 03:27:08.289078 | orchestrator | =============================================================================== 2026-02-20 03:27:08.289089 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.63s 2026-02-20 03:27:08.289100 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 28.42s 2026-02-20 03:27:08.289110 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.10s 2026-02-20 03:27:08.289121 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.29s 2026-02-20 03:27:08.289131 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.28s 2026-02-20 03:27:08.289142 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.27s 2026-02-20 03:27:08.289152 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.93s 2026-02-20 03:27:08.289163 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.29s 2026-02-20 03:27:08.289173 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.35s 2026-02-20 03:27:08.289184 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.13s 2026-02-20 03:27:08.289194 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.11s 2026-02-20 03:27:08.289205 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.30s 2026-02-20 03:27:08.289215 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.22s 2026-02-20 03:27:08.289226 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.08s 2026-02-20 03:27:08.289245 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.04s 2026-02-20 03:27:08.598673 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.00s 2026-02-20 03:27:08.598799 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.68s 2026-02-20 03:27:08.598825 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.09s 2026-02-20 03:27:08.598846 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.03s 2026-02-20 03:27:08.598865 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.00s 2026-02-20 03:27:10.781731 | orchestrator | 2026-02-20 03:27:10 | INFO  | Task 5a9cbee2-25f0-4bcc-8583-0cb44149d481 (barbican) was prepared for execution. 2026-02-20 03:27:10.781799 | orchestrator | 2026-02-20 03:27:10 | INFO  | It takes a moment until task 5a9cbee2-25f0-4bcc-8583-0cb44149d481 (barbican) has been started and output is visible here. 2026-02-20 03:27:52.223043 | orchestrator | 2026-02-20 03:27:52.223169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:27:52.223186 | orchestrator | 2026-02-20 03:27:52.223199 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:27:52.223210 | orchestrator | Friday 20 February 2026 03:27:14 +0000 (0:00:00.189) 0:00:00.189 ******* 2026-02-20 03:27:52.223221 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:27:52.223234 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:27:52.223245 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:27:52.223256 | orchestrator | 2026-02-20 03:27:52.223267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:27:52.223277 | orchestrator | Friday 20 February 2026 03:27:14 +0000 (0:00:00.218) 0:00:00.407 ******* 2026-02-20 03:27:52.223288 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-20 03:27:52.223300 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-20 03:27:52.223311 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-20 03:27:52.223321 | orchestrator | 2026-02-20 03:27:52.223332 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-20 03:27:52.223343 | orchestrator | 2026-02-20 03:27:52.223354 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-20 03:27:52.223365 | orchestrator | Friday 20 February 2026 03:27:14 +0000 (0:00:00.364) 0:00:00.772 ******* 2026-02-20 03:27:52.223376 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:27:52.223387 | orchestrator | 2026-02-20 03:27:52.223398 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-20 03:27:52.223409 | orchestrator | Friday 20 February 2026 03:27:15 +0000 (0:00:00.394) 0:00:01.166 ******* 2026-02-20 03:27:52.223420 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-20 03:27:52.223431 | orchestrator | 2026-02-20 03:27:52.223441 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-20 03:27:52.223452 | orchestrator | Friday 20 February 2026 03:27:18 +0000 (0:00:03.397) 0:00:04.564 ******* 2026-02-20 03:27:52.223463 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-20 03:27:52.223474 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-20 03:27:52.223484 | orchestrator | 2026-02-20 03:27:52.223511 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-20 03:27:52.223522 | orchestrator | Friday 20 February 2026 03:27:25 +0000 (0:00:06.341) 0:00:10.906 ******* 2026-02-20 03:27:52.223565 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:27:52.223578 | orchestrator | 2026-02-20 03:27:52.223591 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-20 
03:27:52.223603 | orchestrator | Friday 20 February 2026 03:27:28 +0000 (0:00:03.111) 0:00:14.017 ******* 2026-02-20 03:27:52.223616 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:27:52.223653 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-20 03:27:52.223666 | orchestrator | 2026-02-20 03:27:52.223679 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-20 03:27:52.223691 | orchestrator | Friday 20 February 2026 03:27:32 +0000 (0:00:03.901) 0:00:17.919 ******* 2026-02-20 03:27:52.223704 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:27:52.223716 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-20 03:27:52.223727 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-20 03:27:52.223738 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-20 03:27:52.223748 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-20 03:27:52.223759 | orchestrator | 2026-02-20 03:27:52.223770 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-20 03:27:52.223780 | orchestrator | Friday 20 February 2026 03:27:47 +0000 (0:00:15.057) 0:00:32.976 ******* 2026-02-20 03:27:52.223791 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-20 03:27:52.223802 | orchestrator | 2026-02-20 03:27:52.223812 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-20 03:27:52.223823 | orchestrator | Friday 20 February 2026 03:27:50 +0000 (0:00:03.539) 0:00:36.516 ******* 2026-02-20 03:27:52.223837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:52.223873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:52.223892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:52.223914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:52.223928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:52.223940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:52.223960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.716420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.716592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.716636 | orchestrator | 2026-02-20 03:27:57.716650 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-20 03:27:57.716664 | orchestrator | Friday 20 February 2026 03:27:52 +0000 (0:00:01.591) 0:00:38.107 ******* 2026-02-20 03:27:57.716690 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-20 03:27:57.716702 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-20 03:27:57.716713 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-20 03:27:57.716724 | orchestrator | 2026-02-20 03:27:57.716735 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-20 03:27:57.716746 | orchestrator | Friday 20 February 2026 03:27:53 +0000 (0:00:01.043) 0:00:39.151 ******* 2026-02-20 03:27:57.716757 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:27:57.716769 | orchestrator | 2026-02-20 03:27:57.716780 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-20 03:27:57.716791 | orchestrator | Friday 20 February 2026 03:27:53 +0000 (0:00:00.276) 0:00:39.428 ******* 2026-02-20 03:27:57.716802 | orchestrator | 
skipping: [testbed-node-0] 2026-02-20 03:27:57.716813 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:27:57.716824 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:27:57.716834 | orchestrator | 2026-02-20 03:27:57.716845 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-20 03:27:57.716856 | orchestrator | Friday 20 February 2026 03:27:53 +0000 (0:00:00.286) 0:00:39.714 ******* 2026-02-20 03:27:57.716868 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:27:57.716879 | orchestrator | 2026-02-20 03:27:57.716890 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-20 03:27:57.716901 | orchestrator | Friday 20 February 2026 03:27:54 +0000 (0:00:00.503) 0:00:40.218 ******* 2026-02-20 03:27:57.716915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:57.716946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:57.716959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:27:57.716984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.716998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.717010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.717022 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:57.717042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:58.996737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:27:58.996872 | orchestrator | 2026-02-20 03:27:58.996903 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-20 03:27:58.996924 | orchestrator | Friday 20 February 2026 03:27:57 +0000 (0:00:03.388) 0:00:43.606 ******* 2026-02-20 03:27:58.996968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:27:58.996993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997035 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:27:58.997056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:27:58.997120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997165 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:27:58.997184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:27:58.997202 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:27:58.997236 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:27:58.997263 | orchestrator | 2026-02-20 03:27:58.997281 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-20 03:27:58.997299 | orchestrator | Friday 20 February 2026 03:27:58 +0000 (0:00:00.535) 0:00:44.142 ******* 2026-02-20 03:27:58.997329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:02.502399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:02.502502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 
03:28:02.502578 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:28:02.502595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:02.502606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:02.502639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:28:02.502650 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:28:02.502679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:02.502697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:02.502708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:28:02.502718 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:28:02.502728 | orchestrator | 2026-02-20 03:28:02.502739 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-20 03:28:02.502779 | orchestrator | Friday 20 February 2026 03:27:58 +0000 (0:00:00.751) 0:00:44.893 ******* 2026-02-20 03:28:02.502789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:02.502809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:02.502833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:11.668003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:11.668209 | orchestrator | 2026-02-20 03:28:11.668222 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-20 03:28:11.668235 | orchestrator | Friday 20 February 2026 03:28:02 +0000 (0:00:03.501) 0:00:48.395 ******* 2026-02-20 03:28:11.668246 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:11.668258 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:28:11.668269 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:28:11.668279 | orchestrator | 2026-02-20 03:28:11.668308 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-20 03:28:11.668320 | orchestrator | Friday 20 February 2026 03:28:04 +0000 (0:00:01.554) 0:00:49.949 ******* 2026-02-20 03:28:11.668331 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:28:11.668342 | orchestrator | 2026-02-20 03:28:11.668352 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-20 03:28:11.668363 | orchestrator | Friday 20 February 2026 03:28:04 +0000 (0:00:00.873) 0:00:50.823 ******* 2026-02-20 03:28:11.668374 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:28:11.668384 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:28:11.668395 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:28:11.668405 | orchestrator | 2026-02-20 03:28:11.668416 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-20 03:28:11.668427 | orchestrator | Friday 20 February 2026 03:28:05 +0000 (0:00:00.527) 0:00:51.351 ******* 2026-02-20 03:28:11.668485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:11.668509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:11.668568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:11.668609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.487946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.488072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.488130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.488152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.488170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:12.488189 | orchestrator | 2026-02-20 03:28:12.488210 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-20 03:28:12.488230 | orchestrator | Friday 20 February 2026 03:28:11 +0000 (0:00:06.214) 0:00:57.565 ******* 2026-02-20 03:28:12.488291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:12.488313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:12.488346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:28:12.488365 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:28:12.488385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:12.488403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:12.488420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:28:12.488438 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:28:12.488477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-20 03:28:14.845450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:28:14.845593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:28:14.845610 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:28:14.845622 | orchestrator | 2026-02-20 03:28:14.845632 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-20 03:28:14.845643 | orchestrator | Friday 20 February 2026 03:28:12 +0000 (0:00:00.817) 0:00:58.382 ******* 2026-02-20 03:28:14.845652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:14.845676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:14.845728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-20 03:28:14.845758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:28:14.845826 | orchestrator | 2026-02-20 03:28:14.845835 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-20 03:28:14.845850 | orchestrator | Friday 20 February 2026 03:28:14 +0000 (0:00:02.354) 0:01:00.737 ******* 2026-02-20 03:28:58.523352 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:28:58.523584 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
03:28:58.523666 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:28:58.523692 | orchestrator | 2026-02-20 03:28:58.523713 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-20 03:28:58.523734 | orchestrator | Friday 20 February 2026 03:28:15 +0000 (0:00:00.313) 0:01:01.051 ******* 2026-02-20 03:28:58.523752 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.523770 | orchestrator | 2026-02-20 03:28:58.523788 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-20 03:28:58.523806 | orchestrator | Friday 20 February 2026 03:28:17 +0000 (0:00:02.049) 0:01:03.101 ******* 2026-02-20 03:28:58.523825 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.523843 | orchestrator | 2026-02-20 03:28:58.523860 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-20 03:28:58.523877 | orchestrator | Friday 20 February 2026 03:28:19 +0000 (0:00:02.123) 0:01:05.224 ******* 2026-02-20 03:28:58.523896 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.523915 | orchestrator | 2026-02-20 03:28:58.523936 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-20 03:28:58.523954 | orchestrator | Friday 20 February 2026 03:28:31 +0000 (0:00:11.687) 0:01:16.911 ******* 2026-02-20 03:28:58.523974 | orchestrator | 2026-02-20 03:28:58.523992 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-20 03:28:58.524010 | orchestrator | Friday 20 February 2026 03:28:31 +0000 (0:00:00.070) 0:01:16.982 ******* 2026-02-20 03:28:58.524027 | orchestrator | 2026-02-20 03:28:58.524045 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-20 03:28:58.524062 | orchestrator | Friday 20 February 2026 03:28:31 +0000 (0:00:00.068) 0:01:17.050 ******* 2026-02-20 
03:28:58.524080 | orchestrator | 2026-02-20 03:28:58.524098 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-20 03:28:58.524117 | orchestrator | Friday 20 February 2026 03:28:31 +0000 (0:00:00.068) 0:01:17.119 ******* 2026-02-20 03:28:58.524136 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.524155 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:28:58.524172 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:28:58.524190 | orchestrator | 2026-02-20 03:28:58.524209 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-20 03:28:58.524228 | orchestrator | Friday 20 February 2026 03:28:37 +0000 (0:00:06.467) 0:01:23.587 ******* 2026-02-20 03:28:58.524246 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.524263 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:28:58.524282 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:28:58.524299 | orchestrator | 2026-02-20 03:28:58.524317 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-20 03:28:58.524335 | orchestrator | Friday 20 February 2026 03:28:47 +0000 (0:00:09.998) 0:01:33.586 ******* 2026-02-20 03:28:58.524353 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:28:58.524370 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:28:58.524388 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:28:58.524406 | orchestrator | 2026-02-20 03:28:58.524425 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:28:58.524481 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:28:58.524533 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:28:58.524554 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:28:58.524573 | orchestrator | 2026-02-20 03:28:58.524591 | orchestrator | 2026-02-20 03:28:58.524609 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:28:58.524628 | orchestrator | Friday 20 February 2026 03:28:58 +0000 (0:00:10.526) 0:01:44.112 ******* 2026-02-20 03:28:58.524645 | orchestrator | =============================================================================== 2026-02-20 03:28:58.524664 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.06s 2026-02-20 03:28:58.524681 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.69s 2026-02-20 03:28:58.524699 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.53s 2026-02-20 03:28:58.524719 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.00s 2026-02-20 03:28:58.524739 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.47s 2026-02-20 03:28:58.524757 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.34s 2026-02-20 03:28:58.524776 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.21s 2026-02-20 03:28:58.524804 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.90s 2026-02-20 03:28:58.524816 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.54s 2026-02-20 03:28:58.524827 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.50s 2026-02-20 03:28:58.524837 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.40s 2026-02-20 03:28:58.524848 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.39s 
2026-02-20 03:28:58.524865 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.11s 2026-02-20 03:28:58.524884 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.35s 2026-02-20 03:28:58.524902 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.12s 2026-02-20 03:28:58.524951 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.05s 2026-02-20 03:28:58.524970 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.59s 2026-02-20 03:28:58.524988 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.55s 2026-02-20 03:28:58.525004 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.04s 2026-02-20 03:28:58.525022 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.87s 2026-02-20 03:29:00.750964 | orchestrator | 2026-02-20 03:29:00 | INFO  | Task 4a82572e-f5dc-4116-b1c2-6670ea3f4fce (designate) was prepared for execution. 2026-02-20 03:29:00.751070 | orchestrator | 2026-02-20 03:29:00 | INFO  | It takes a moment until task 4a82572e-f5dc-4116-b1c2-6670ea3f4fce (designate) has been started and output is visible here. 
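Annotation: the barbican tasks above loop over a dict of kolla service definitions, and each `changed:`/`skipping:` item echoes one entry (container name, image, volumes, healthcheck, optional haproxy section). The sketch below is illustrative only, assuming nothing beyond the field names visible in the log; `enabled_containers` is a hypothetical helper, not part of kolla-ansible, and the `barbican-worker` entry is abbreviated with `enabled: False` purely to show the filtering behavior.

```python
# Illustrative sketch of the service-definition shape printed in the log above.
# Field names ('container_name', 'enabled', 'image', 'healthcheck') come from
# the log; the helper and the disabled worker entry are hypothetical.
services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
            "timeout": "30",
        },
    },
    "barbican-worker": {
        "container_name": "barbican_worker",
        "enabled": False,  # hypothetical: a disabled service would be skipped
        "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130",
        "healthcheck": {},
    },
}

def enabled_containers(services):
    """Return (container_name, image) for each service with enabled=True."""
    return [
        (svc["container_name"], svc["image"])
        for svc in services.values()
        if svc.get("enabled")
    ]
```

Under this assumed structure, iterating the dict with a filter on `enabled` mirrors why tasks in the log report per-item `changed:` for enabled services and `skipping:` otherwise.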
2026-02-20 03:29:31.038369 | orchestrator | 2026-02-20 03:29:31.038536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:29:31.038557 | orchestrator | 2026-02-20 03:29:31.038569 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:29:31.038580 | orchestrator | Friday 20 February 2026 03:29:04 +0000 (0:00:00.192) 0:00:00.192 ******* 2026-02-20 03:29:31.038592 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:29:31.038604 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:29:31.038640 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:29:31.038652 | orchestrator | 2026-02-20 03:29:31.038664 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:29:31.038675 | orchestrator | Friday 20 February 2026 03:29:04 +0000 (0:00:00.217) 0:00:00.409 ******* 2026-02-20 03:29:31.038686 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-20 03:29:31.038698 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-20 03:29:31.038709 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-20 03:29:31.038719 | orchestrator | 2026-02-20 03:29:31.038730 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-20 03:29:31.038741 | orchestrator | 2026-02-20 03:29:31.038752 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-20 03:29:31.038763 | orchestrator | Friday 20 February 2026 03:29:05 +0000 (0:00:00.314) 0:00:00.724 ******* 2026-02-20 03:29:31.038774 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:29:31.038786 | orchestrator | 2026-02-20 03:29:31.038797 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-20 03:29:31.038808 | orchestrator | Friday 20 February 2026 03:29:05 +0000 (0:00:00.410) 0:00:01.134 ******* 2026-02-20 03:29:31.038818 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-20 03:29:31.038829 | orchestrator | 2026-02-20 03:29:31.038840 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-20 03:29:31.038851 | orchestrator | Friday 20 February 2026 03:29:08 +0000 (0:00:03.414) 0:00:04.548 ******* 2026-02-20 03:29:31.038862 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-20 03:29:31.038874 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-20 03:29:31.038884 | orchestrator | 2026-02-20 03:29:31.038895 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-20 03:29:31.038909 | orchestrator | Friday 20 February 2026 03:29:15 +0000 (0:00:06.237) 0:00:10.786 ******* 2026-02-20 03:29:31.038922 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:29:31.038934 | orchestrator | 2026-02-20 03:29:31.038946 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-20 03:29:31.038959 | orchestrator | Friday 20 February 2026 03:29:18 +0000 (0:00:03.121) 0:00:13.907 ******* 2026-02-20 03:29:31.038972 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:29:31.038986 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-20 03:29:31.039000 | orchestrator | 2026-02-20 03:29:31.039012 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-20 03:29:31.039025 | orchestrator | Friday 20 February 2026 03:29:22 +0000 (0:00:03.939) 0:00:17.846 ******* 2026-02-20 03:29:31.039037 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-20 03:29:31.039049 | orchestrator | 2026-02-20 03:29:31.039061 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-20 03:29:31.039074 | orchestrator | Friday 20 February 2026 03:29:25 +0000 (0:00:03.177) 0:00:21.023 ******* 2026-02-20 03:29:31.039086 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-20 03:29:31.039106 | orchestrator | 2026-02-20 03:29:31.039126 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-20 03:29:31.039165 | orchestrator | Friday 20 February 2026 03:29:29 +0000 (0:00:03.578) 0:00:24.602 ******* 2026-02-20 03:29:31.039191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:31.039255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:31.039280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:31.039303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:31.039325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:31.039355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:31.039392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:31.039428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.310913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.310996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 
03:29:37.311097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:37.311111 | orchestrator | 2026-02-20 03:29:37.311119 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-20 03:29:37.311127 | orchestrator | Friday 20 February 2026 03:29:31 +0000 (0:00:02.897) 0:00:27.500 ******* 2026-02-20 03:29:37.311140 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:29:37.311147 | orchestrator | 2026-02-20 03:29:37.311153 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-20 03:29:37.311163 | orchestrator | Friday 20 February 2026 03:29:32 +0000 (0:00:00.142) 0:00:27.642 ******* 2026-02-20 03:29:37.311170 | orchestrator | skipping: [testbed-node-0] 2026-02-20 
03:29:37.311176 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:29:37.311182 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:29:37.311188 | orchestrator | 2026-02-20 03:29:37.311195 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-20 03:29:37.311201 | orchestrator | Friday 20 February 2026 03:29:32 +0000 (0:00:00.533) 0:00:28.176 ******* 2026-02-20 03:29:37.311208 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:29:37.311214 | orchestrator | 2026-02-20 03:29:37.311221 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-20 03:29:37.311227 | orchestrator | Friday 20 February 2026 03:29:33 +0000 (0:00:00.539) 0:00:28.715 ******* 2026-02-20 03:29:37.311235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:37.311249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:39.154285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:39.154365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.154614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.965058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.965189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.965240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:39.965266 | orchestrator | 2026-02-20 03:29:39.965280 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-20 03:29:39.965293 | orchestrator | Friday 20 February 2026 03:29:39 +0000 (0:00:06.013) 0:00:34.729 ******* 2026-02-20 03:29:39.965306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:39.965320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:39.965350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:39.965363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:39.965386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:29:39.965402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-20 03:29:39.965414 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:29:39.965427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:39.965438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:39.965450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:39.965469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.695975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696128 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:29:40.696144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:40.696158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:40.696172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 
03:29:40.696254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696265 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:29:40.696277 | orchestrator | 2026-02-20 03:29:40.696289 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-20 03:29:40.696302 | orchestrator | Friday 20 February 2026 03:29:40 +0000 (0:00:00.918) 0:00:35.647 ******* 2026-02-20 03:29:40.696314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:40.696326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:40.696337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:40.696363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014406 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:29:41.014422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:41.014436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:41.014448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014587 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:29:41.014599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:29:41.014611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:29:41.014631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:29:41.014662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:29:45.591954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:29:45.592064 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:29:45.592080 | orchestrator | 2026-02-20 03:29:45.592093 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-20 
03:29:45.592105 | orchestrator | Friday 20 February 2026 03:29:41 +0000 (0:00:00.942) 0:00:36.589 ******* 2026-02-20 03:29:45.592119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:45.592134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:45.592165 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:45.592195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:45.592305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.663788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.663941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.663961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.664005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.664028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:29:56.664058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:29:56.664110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:29:56.664134 | orchestrator |
2026-02-20 03:29:56.664156 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-20 03:29:56.664176 | orchestrator | Friday 20 February 2026 03:29:47 +0000 (0:00:06.497) 0:00:43.087 *******
2026-02-20 03:29:56.664190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-20 03:29:56.664216 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:56.664228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:29:56.664240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:29:56.664268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:04.717439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:04.717489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:04.717511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:04.717532 | orchestrator |
2026-02-20 03:30:04.717554 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-02-20 03:30:04.717577 | orchestrator | Friday 20 February 2026 03:30:01 +0000 (0:00:13.538) 0:00:56.626 *******
2026-02-20 03:30:04.717615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-20 03:30:08.853214 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-20 03:30:08.853305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-20 03:30:08.853340 | orchestrator |
2026-02-20 03:30:08.853353 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-02-20 03:30:08.853364 | orchestrator | Friday 20 February 2026 03:30:04 +0000 (0:00:03.665) 0:01:00.292 *******
2026-02-20 03:30:08.853375 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-20 03:30:08.853386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-20 03:30:08.853397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-20 03:30:08.853408 | orchestrator |
2026-02-20 03:30:08.853419 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-02-20 03:30:08.853430 | orchestrator | Friday 20 February 2026 03:30:07 +0000 (0:00:02.375) 0:01:02.667 *******
2026-02-20 03:30:08.853443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001',
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:08.853516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:08.853530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-20 03:30:08.853570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:08.853592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:08.853604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-20 03:30:08.853617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:08.853629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:08.853640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-20 03:30:08.853652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:08.853682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:11.546563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-20 03:30:11.546661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:11.546677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:11.546690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:11.546702 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:11.546714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:11.546774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-20 03:30:11.546789 | orchestrator |
2026-02-20 03:30:11.546803 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-02-20 03:30:11.546815 | orchestrator | Friday 20 February 2026 03:30:09 +0000 (0:00:02.858) 0:01:05.526 *******
2026-02-20 03:30:11.546827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2026-02-20 03:30:11.546839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20
03:30:11.546851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:11.546871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:11.546894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:12.485119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:12.485179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:12.485198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:12.485209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:12.485218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:12.485225 | orchestrator | 2026-02-20 03:30:12.485233 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-20 03:30:12.485243 | orchestrator | Friday 20 February 2026 03:30:12 +0000 (0:00:02.525) 0:01:08.051 ******* 2026-02-20 03:30:13.451478 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:30:13.451579 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:30:13.451592 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:30:13.451604 | orchestrator | 2026-02-20 03:30:13.451614 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-20 03:30:13.451625 | orchestrator | Friday 20 February 2026 03:30:12 +0000 (0:00:00.307) 0:01:08.359 ******* 2026-02-20 03:30:13.451638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:13.451652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:30:13.451664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451764 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:30:13.451774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:13.451786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:30:13.451804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:13.451885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:30:16.864433 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:30:16.864593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-20 03:30:16.864612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 03:30:16.864646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 03:30:16.864658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 03:30:16.864670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 03:30:16.864694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:30:16.864705 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:30:16.864715 | orchestrator | 2026-02-20 03:30:16.864742 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-20 03:30:16.864754 | orchestrator | Friday 20 February 2026 03:30:13 +0000 (0:00:00.784) 0:01:09.144 ******* 2026-02-20 03:30:16.864765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:30:16.864776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:30:16.864794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-20 03:30:16.864809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:16.864827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:18.674905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-20 03:30:18.674981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:30:18.675099 | orchestrator | 2026-02-20 03:30:18.675105 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-20 03:30:18.675112 | orchestrator | Friday 20 February 2026 03:30:18 +0000 (0:00:04.799) 0:01:13.944 ******* 2026-02-20 03:30:18.675117 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:30:18.675126 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:31:36.490555 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:31:36.490682 | orchestrator | 2026-02-20 03:31:36.490696 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-20 03:31:36.490707 | orchestrator | Friday 20 February 2026 03:30:18 +0000 (0:00:00.304) 0:01:14.249 ******* 2026-02-20 03:31:36.490717 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-20 03:31:36.490725 | orchestrator | 2026-02-20 03:31:36.490734 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-20 03:31:36.490743 | orchestrator | Friday 20 February 2026 03:30:20 +0000 (0:00:02.003) 0:01:16.252 ******* 2026-02-20 03:31:36.490775 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-20 03:31:36.490785 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-20 03:31:36.490793 | orchestrator | 2026-02-20 03:31:36.490802 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-20 03:31:36.490811 | orchestrator | Friday 20 February 2026 03:30:22 +0000 (0:00:02.163) 0:01:18.416 ******* 2026-02-20 03:31:36.490819 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.490828 | orchestrator | 2026-02-20 03:31:36.490836 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-20 03:31:36.490845 | orchestrator | Friday 20 February 2026 03:30:38 +0000 (0:00:15.367) 0:01:33.784 ******* 2026-02-20 03:31:36.490853 | orchestrator | 2026-02-20 03:31:36.490862 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-20 03:31:36.490872 | orchestrator | Friday 20 February 2026 03:30:38 +0000 (0:00:00.069) 0:01:33.853 ******* 2026-02-20 03:31:36.490881 | orchestrator | 2026-02-20 03:31:36.490889 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-20 03:31:36.490898 | orchestrator | Friday 20 February 2026 03:30:38 +0000 (0:00:00.083) 0:01:33.936 ******* 2026-02-20 03:31:36.490906 | orchestrator | 2026-02-20 
03:31:36.490915 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-20 03:31:36.490924 | orchestrator | Friday 20 February 2026 03:30:38 +0000 (0:00:00.071) 0:01:34.007 ******* 2026-02-20 03:31:36.490932 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.490941 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.490949 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.490958 | orchestrator | 2026-02-20 03:31:36.490967 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-20 03:31:36.490975 | orchestrator | Friday 20 February 2026 03:30:46 +0000 (0:00:08.457) 0:01:42.465 ******* 2026-02-20 03:31:36.490984 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.490993 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.491001 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.491011 | orchestrator | 2026-02-20 03:31:36.491021 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-20 03:31:36.491031 | orchestrator | Friday 20 February 2026 03:30:52 +0000 (0:00:05.872) 0:01:48.338 ******* 2026-02-20 03:31:36.491041 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.491051 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.491061 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.491071 | orchestrator | 2026-02-20 03:31:36.491081 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-20 03:31:36.491090 | orchestrator | Friday 20 February 2026 03:31:01 +0000 (0:00:08.470) 0:01:56.808 ******* 2026-02-20 03:31:36.491101 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.491110 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.491120 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.491130 | orchestrator | 2026-02-20 03:31:36.491140 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-20 03:31:36.491149 | orchestrator | Friday 20 February 2026 03:31:07 +0000 (0:00:05.894) 0:02:02.702 ******* 2026-02-20 03:31:36.491159 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.491169 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.491178 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.491188 | orchestrator | 2026-02-20 03:31:36.491198 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-20 03:31:36.491207 | orchestrator | Friday 20 February 2026 03:31:17 +0000 (0:00:10.875) 0:02:13.577 ******* 2026-02-20 03:31:36.491217 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.491227 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:31:36.491237 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:31:36.491246 | orchestrator | 2026-02-20 03:31:36.491256 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-20 03:31:36.491272 | orchestrator | Friday 20 February 2026 03:31:29 +0000 (0:00:11.114) 0:02:24.691 ******* 2026-02-20 03:31:36.491283 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:31:36.491291 | orchestrator | 2026-02-20 03:31:36.491300 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:31:36.491310 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:31:36.491344 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:31:36.491354 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:31:36.491363 | orchestrator | 2026-02-20 03:31:36.491372 | orchestrator | 2026-02-20 03:31:36.491380 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-20 03:31:36.491390 | orchestrator | Friday 20 February 2026 03:31:36 +0000 (0:00:07.036) 0:02:31.728 ******* 2026-02-20 03:31:36.491398 | orchestrator | =============================================================================== 2026-02-20 03:31:36.491431 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.37s 2026-02-20 03:31:36.491440 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.54s 2026-02-20 03:31:36.491463 | orchestrator | designate : Restart designate-worker container ------------------------- 11.11s 2026-02-20 03:31:36.491472 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.88s 2026-02-20 03:31:36.491481 | orchestrator | designate : Restart designate-central container ------------------------- 8.47s 2026-02-20 03:31:36.491489 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.46s 2026-02-20 03:31:36.491498 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.04s 2026-02-20 03:31:36.491507 | orchestrator | designate : Copying over config.json files for services ----------------- 6.50s 2026-02-20 03:31:36.491515 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.24s 2026-02-20 03:31:36.491524 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.01s 2026-02-20 03:31:36.491532 | orchestrator | designate : Restart designate-producer container ------------------------ 5.89s 2026-02-20 03:31:36.491541 | orchestrator | designate : Restart designate-api container ----------------------------- 5.87s 2026-02-20 03:31:36.491550 | orchestrator | designate : Check designate containers ---------------------------------- 4.80s 2026-02-20 03:31:36.491558 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.94s 2026-02-20 03:31:36.491567 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.67s 2026-02-20 03:31:36.491576 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.58s 2026-02-20 03:31:36.491584 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.41s 2026-02-20 03:31:36.491593 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.18s 2026-02-20 03:31:36.491601 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.12s 2026-02-20 03:31:36.491610 | orchestrator | designate : Ensuring config directories exist --------------------------- 2.90s 2026-02-20 03:31:38.669303 | orchestrator | 2026-02-20 03:31:38 | INFO  | Task c0ce56fb-234b-4ede-825f-161d2d71c81d (octavia) was prepared for execution. 2026-02-20 03:31:38.669454 | orchestrator | 2026-02-20 03:31:38 | INFO  | It takes a moment until task c0ce56fb-234b-4ede-825f-161d2d71c81d (octavia) has been started and output is visible here. 
2026-02-20 03:33:39.773190 | orchestrator | 2026-02-20 03:33:39.773400 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:33:39.773429 | orchestrator | 2026-02-20 03:33:39.773448 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:33:39.773497 | orchestrator | Friday 20 February 2026 03:31:42 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-20 03:33:39.773517 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:33:39.773535 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:33:39.773552 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:33:39.773571 | orchestrator | 2026-02-20 03:33:39.773589 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:33:39.773605 | orchestrator | Friday 20 February 2026 03:31:42 +0000 (0:00:00.234) 0:00:00.422 ******* 2026-02-20 03:33:39.773622 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-20 03:33:39.773641 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-20 03:33:39.773660 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-20 03:33:39.773678 | orchestrator | 2026-02-20 03:33:39.773696 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-20 03:33:39.773714 | orchestrator | 2026-02-20 03:33:39.773733 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-20 03:33:39.773750 | orchestrator | Friday 20 February 2026 03:31:42 +0000 (0:00:00.348) 0:00:00.771 ******* 2026-02-20 03:33:39.773769 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:33:39.773788 | orchestrator | 2026-02-20 03:33:39.773806 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-20 03:33:39.773824 | orchestrator | Friday 20 February 2026 03:31:43 +0000 (0:00:00.483) 0:00:01.254 ******* 2026-02-20 03:33:39.773842 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-20 03:33:39.773860 | orchestrator | 2026-02-20 03:33:39.773879 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-20 03:33:39.773896 | orchestrator | Friday 20 February 2026 03:31:46 +0000 (0:00:03.296) 0:00:04.551 ******* 2026-02-20 03:33:39.773914 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-20 03:33:39.773932 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-20 03:33:39.773949 | orchestrator | 2026-02-20 03:33:39.773983 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-20 03:33:39.774003 | orchestrator | Friday 20 February 2026 03:31:52 +0000 (0:00:06.290) 0:00:10.841 ******* 2026-02-20 03:33:39.774091 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:33:39.774111 | orchestrator | 2026-02-20 03:33:39.774127 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-20 03:33:39.774143 | orchestrator | Friday 20 February 2026 03:31:55 +0000 (0:00:03.135) 0:00:13.977 ******* 2026-02-20 03:33:39.774159 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:33:39.774176 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-20 03:33:39.774194 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-20 03:33:39.774210 | orchestrator | 2026-02-20 03:33:39.774226 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-20 03:33:39.774243 | orchestrator | Friday 20 February 2026 03:32:04 +0000 
(0:00:08.094) 0:00:22.071 ******* 2026-02-20 03:33:39.774258 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:33:39.774275 | orchestrator | 2026-02-20 03:33:39.774292 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-20 03:33:39.774330 | orchestrator | Friday 20 February 2026 03:32:07 +0000 (0:00:03.185) 0:00:25.257 ******* 2026-02-20 03:33:39.774347 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-20 03:33:39.774363 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-20 03:33:39.774379 | orchestrator | 2026-02-20 03:33:39.774394 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-20 03:33:39.774408 | orchestrator | Friday 20 February 2026 03:32:14 +0000 (0:00:07.021) 0:00:32.279 ******* 2026-02-20 03:33:39.774437 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-20 03:33:39.774454 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-20 03:33:39.774469 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-20 03:33:39.774485 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-20 03:33:39.774501 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-20 03:33:39.774517 | orchestrator | 2026-02-20 03:33:39.774532 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-20 03:33:39.774548 | orchestrator | Friday 20 February 2026 03:32:29 +0000 (0:00:15.124) 0:00:47.403 ******* 2026-02-20 03:33:39.774563 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:33:39.774580 | orchestrator | 2026-02-20 03:33:39.774596 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-20 03:33:39.774613 | orchestrator | Friday 20 February 2026 03:32:30 +0000 (0:00:00.730) 0:00:48.133 ******* 2026-02-20 03:33:39.774629 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.774645 | orchestrator | 2026-02-20 03:33:39.774661 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-20 03:33:39.774677 | orchestrator | Friday 20 February 2026 03:32:34 +0000 (0:00:04.830) 0:00:52.964 ******* 2026-02-20 03:33:39.774693 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.774709 | orchestrator | 2026-02-20 03:33:39.774726 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-20 03:33:39.774768 | orchestrator | Friday 20 February 2026 03:32:39 +0000 (0:00:04.039) 0:00:57.003 ******* 2026-02-20 03:33:39.774785 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:33:39.774803 | orchestrator | 2026-02-20 03:33:39.774819 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-20 03:33:39.774835 | orchestrator | Friday 20 February 2026 03:32:42 +0000 (0:00:03.078) 0:01:00.081 ******* 2026-02-20 03:33:39.774853 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-20 03:33:39.774863 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-20 03:33:39.774872 | orchestrator | 2026-02-20 03:33:39.774882 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-20 03:33:39.774892 | orchestrator | Friday 20 February 2026 03:32:52 +0000 (0:00:10.296) 0:01:10.378 ******* 2026-02-20 03:33:39.774906 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-20 03:33:39.774917 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-20 03:33:39.774928 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-20 03:33:39.774939 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-20 03:33:39.774948 | orchestrator | 2026-02-20 03:33:39.774958 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-20 03:33:39.774968 | orchestrator | Friday 20 February 2026 03:33:07 +0000 (0:00:15.108) 0:01:25.487 ******* 2026-02-20 03:33:39.774977 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.774987 | orchestrator | 2026-02-20 03:33:39.774996 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-20 03:33:39.775006 | orchestrator | Friday 20 February 2026 03:33:12 +0000 (0:00:04.660) 0:01:30.147 ******* 2026-02-20 03:33:39.775015 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775025 | orchestrator | 2026-02-20 03:33:39.775034 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-20 03:33:39.775044 | orchestrator | Friday 20 February 2026 03:33:17 +0000 (0:00:05.076) 0:01:35.223 ******* 2026-02-20 03:33:39.775065 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:33:39.775074 | orchestrator | 2026-02-20 03:33:39.775093 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-20 03:33:39.775103 | orchestrator | Friday 20 February 2026 03:33:17 +0000 (0:00:00.222) 0:01:35.445 ******* 2026-02-20 03:33:39.775112 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:33:39.775122 | orchestrator | 2026-02-20 03:33:39.775131 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-20 03:33:39.775141 | orchestrator | Friday 20 February 2026 03:33:21 +0000 (0:00:04.263) 0:01:39.708 ******* 2026-02-20 03:33:39.775150 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:33:39.775160 | orchestrator | 2026-02-20 03:33:39.775169 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-20 03:33:39.775177 | orchestrator | Friday 20 February 2026 03:33:22 +0000 (0:00:01.039) 0:01:40.748 ******* 2026-02-20 03:33:39.775185 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775192 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775200 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775208 | orchestrator | 2026-02-20 03:33:39.775216 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-20 03:33:39.775224 | orchestrator | Friday 20 February 2026 03:33:27 +0000 (0:00:04.997) 0:01:45.745 ******* 2026-02-20 03:33:39.775232 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775239 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775247 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775255 | orchestrator | 2026-02-20 03:33:39.775263 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-20 03:33:39.775271 | orchestrator | Friday 20 February 2026 03:33:32 +0000 (0:00:04.489) 0:01:50.235 ******* 2026-02-20 03:33:39.775278 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775286 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775294 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775302 | orchestrator | 2026-02-20 03:33:39.775336 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-20 
03:33:39.775345 | orchestrator | Friday 20 February 2026 03:33:33 +0000 (0:00:00.989) 0:01:51.225 ******* 2026-02-20 03:33:39.775353 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:33:39.775360 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:33:39.775368 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:33:39.775376 | orchestrator | 2026-02-20 03:33:39.775384 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-20 03:33:39.775392 | orchestrator | Friday 20 February 2026 03:33:35 +0000 (0:00:01.888) 0:01:53.113 ******* 2026-02-20 03:33:39.775400 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775408 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775415 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775423 | orchestrator | 2026-02-20 03:33:39.775431 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-20 03:33:39.775439 | orchestrator | Friday 20 February 2026 03:33:36 +0000 (0:00:01.232) 0:01:54.346 ******* 2026-02-20 03:33:39.775446 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775454 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775462 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775470 | orchestrator | 2026-02-20 03:33:39.775478 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-20 03:33:39.775485 | orchestrator | Friday 20 February 2026 03:33:37 +0000 (0:00:01.195) 0:01:55.542 ******* 2026-02-20 03:33:39.775493 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:33:39.775501 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:33:39.775509 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:33:39.775517 | orchestrator | 2026-02-20 03:33:39.775532 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-20 03:34:07.011271 | orchestrator 
| Friday 20 February 2026 03:33:39 +0000 (0:00:02.211) 0:01:57.753 ******* 2026-02-20 03:34:07.011504 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:34:07.011533 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:34:07.011552 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:34:07.011569 | orchestrator | 2026-02-20 03:34:07.011586 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-20 03:34:07.011604 | orchestrator | Friday 20 February 2026 03:33:42 +0000 (0:00:02.595) 0:02:00.349 ******* 2026-02-20 03:34:07.011621 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.011640 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:34:07.011658 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:34:07.011678 | orchestrator | 2026-02-20 03:34:07.011694 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-20 03:34:07.011710 | orchestrator | Friday 20 February 2026 03:33:43 +0000 (0:00:00.687) 0:02:01.036 ******* 2026-02-20 03:34:07.011728 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:34:07.011747 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.011766 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:34:07.011784 | orchestrator | 2026-02-20 03:34:07.011802 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-20 03:34:07.011819 | orchestrator | Friday 20 February 2026 03:33:46 +0000 (0:00:03.101) 0:02:04.138 ******* 2026-02-20 03:34:07.011838 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:34:07.011856 | orchestrator | 2026-02-20 03:34:07.011874 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-20 03:34:07.011894 | orchestrator | Friday 20 February 2026 03:33:46 +0000 (0:00:00.515) 0:02:04.654 ******* 2026-02-20 
03:34:07.011912 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.011929 | orchestrator | 2026-02-20 03:34:07.011947 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-20 03:34:07.011966 | orchestrator | Friday 20 February 2026 03:33:49 +0000 (0:00:03.260) 0:02:07.914 ******* 2026-02-20 03:34:07.011983 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.012003 | orchestrator | 2026-02-20 03:34:07.012022 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-20 03:34:07.012042 | orchestrator | Friday 20 February 2026 03:33:52 +0000 (0:00:03.077) 0:02:10.992 ******* 2026-02-20 03:34:07.012063 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-20 03:34:07.012105 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-20 03:34:07.012126 | orchestrator | 2026-02-20 03:34:07.012145 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-20 03:34:07.012165 | orchestrator | Friday 20 February 2026 03:34:00 +0000 (0:00:07.714) 0:02:18.706 ******* 2026-02-20 03:34:07.012183 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.012201 | orchestrator | 2026-02-20 03:34:07.012220 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-20 03:34:07.012238 | orchestrator | Friday 20 February 2026 03:34:04 +0000 (0:00:03.866) 0:02:22.572 ******* 2026-02-20 03:34:07.012258 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:34:07.012277 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:34:07.012326 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:34:07.012344 | orchestrator | 2026-02-20 03:34:07.012363 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-20 03:34:07.012382 | orchestrator | Friday 20 February 2026 03:34:05 +0000 (0:00:00.452) 0:02:23.024 ******* 
2026-02-20 03:34:07.012405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:07.012485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:07.012510 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:07.012530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:07.012559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:07.012578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:07.012610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:07.012632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:07.012663 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446567 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:08.446691 | orchestrator | 2026-02-20 03:34:08.446713 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-20 03:34:08.446736 | orchestrator | Friday 20 February 2026 03:34:07 +0000 (0:00:02.407) 0:02:25.432 ******* 2026-02-20 03:34:08.446749 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:34:08.446763 | orchestrator | 2026-02-20 03:34:08.446776 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-20 03:34:08.446789 | orchestrator | Friday 20 February 2026 03:34:07 +0000 (0:00:00.126) 0:02:25.559 ******* 2026-02-20 03:34:08.446802 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:34:08.446836 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:34:08.446849 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:34:08.446861 | orchestrator | 2026-02-20 03:34:08.446874 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-20 03:34:08.446887 | orchestrator | Friday 20 February 2026 03:34:07 +0000 (0:00:00.284) 0:02:25.844 ******* 2026-02-20 03:34:08.446902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:08.446924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:08.446948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:08.446962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:08.446973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:08.446985 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:34:08.447007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:13.183058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:13.183182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:13.183221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:13.183252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:13.183276 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:34:13.183346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:13.183359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:13.183419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:13.183440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:13.183462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:13.183474 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:34:13.183485 | orchestrator | 2026-02-20 03:34:13.183498 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-20 03:34:13.183510 | orchestrator | Friday 20 February 2026 03:34:08 +0000 (0:00:00.689) 0:02:26.533 ******* 2026-02-20 03:34:13.183521 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:34:13.183533 | orchestrator | 2026-02-20 03:34:13.183544 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-20 03:34:13.183555 | orchestrator | Friday 20 February 2026 03:34:09 +0000 (0:00:00.659) 0:02:27.193 ******* 2026-02-20 03:34:13.183566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:13.183580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:13.183605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:14.703048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:14.703150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:14.703165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:14.703178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:14.703400 | orchestrator | 2026-02-20 03:34:14.703414 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-20 03:34:14.703480 | orchestrator | Friday 20 February 2026 03:34:14 +0000 (0:00:04.978) 0:02:32.171 ******* 2026-02-20 03:34:14.703511 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:14.797832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:14.797933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:14.797949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:14.797963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:14.797976 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:34:14.797990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:14.798078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:14.798125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:14.798139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:14.798150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:14.798162 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:34:14.798173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:14.798193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:14.798205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:14.798229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-20 03:34:15.572474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:15.572566 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:34:15.572580 | orchestrator | 2026-02-20 03:34:15.572590 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-20 03:34:15.572600 | orchestrator | Friday 20 February 2026 03:34:14 +0000 (0:00:00.618) 0:02:32.790 ******* 2026-02-20 03:34:15.572611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-20 03:34:15.572622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:15.572654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:15.572664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:15.572709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:15.572720 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:34:15.572729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:15.572739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:15.572748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:15.572764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:15.572773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:15.572782 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:34:15.572801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 03:34:20.251947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 03:34:20.252061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 03:34:20.252081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 03:34:20.252120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 03:34:20.252134 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:34:20.252148 | orchestrator | 2026-02-20 03:34:20.252161 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-20 
03:34:20.252173 | orchestrator | Friday 20 February 2026 03:34:16 +0000 (0:00:01.277) 0:02:34.068 ******* 2026-02-20 03:34:20.252201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:20.252234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:20.252247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:20.252269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:20.252336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:20.252352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:20.252370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:20.252390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-20 03:34:35.197511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:35.197522 | orchestrator | 2026-02-20 03:34:35.197533 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-20 03:34:35.197544 | orchestrator | Friday 20 February 2026 03:34:21 +0000 (0:00:05.140) 0:02:39.208 ******* 2026-02-20 03:34:35.197553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-20 03:34:35.197569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-20 03:34:35.197578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-20 03:34:35.197587 | orchestrator | 2026-02-20 03:34:35.197596 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-20 03:34:35.197604 | orchestrator | Friday 20 February 2026 03:34:22 +0000 (0:00:01.585) 0:02:40.794 ******* 2026-02-20 03:34:35.197615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:35.197625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:35.197639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:35.197654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:49.840624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:49.840793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:49.840821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:49.840991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:49.841009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:34:49.841027 | orchestrator | 2026-02-20 03:34:49.841046 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-20 03:34:49.841064 | orchestrator | Friday 20 February 2026 03:34:38 +0000 (0:00:15.421) 0:02:56.216 ******* 2026-02-20 03:34:49.841080 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:34:49.841097 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:34:49.841113 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:34:49.841129 | orchestrator | 2026-02-20 03:34:49.841148 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-20 03:34:49.841165 | orchestrator | Friday 20 February 2026 03:34:39 +0000 (0:00:01.667) 0:02:57.884 ******* 2026-02-20 03:34:49.841179 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-20 03:34:49.841197 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-20 03:34:49.841208 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-20 03:34:49.841219 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-20 03:34:49.841230 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-20 03:34:49.841241 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-20 03:34:49.841252 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-20 03:34:49.841270 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-20 03:34:49.841304 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-20 03:34:49.841316 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-20 03:34:49.841327 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-20 03:34:49.841338 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-20 03:34:49.841349 | orchestrator | 2026-02-20 03:34:49.841361 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-20 03:34:49.841372 | orchestrator | Friday 20 February 2026 03:34:44 +0000 (0:00:04.935) 0:03:02.819 ******* 2026-02-20 03:34:49.841382 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-20 03:34:49.841394 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-20 03:34:49.841414 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-20 03:34:58.425701 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.425811 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.425827 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.425839 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.425852 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.425864 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.425876 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-20 03:34:58.425887 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-20 03:34:58.425899 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-20 03:34:58.425911 | orchestrator | 2026-02-20 03:34:58.425924 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-20 03:34:58.425937 | orchestrator | Friday 20 February 2026 03:34:49 +0000 (0:00:05.005) 0:03:07.824 ******* 2026-02-20 03:34:58.425948 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-20 03:34:58.425960 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-20 03:34:58.425972 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-20 03:34:58.425983 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.425995 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.426006 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-20 03:34:58.426072 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.426084 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.426096 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-20 03:34:58.426107 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-20 03:34:58.426119 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-20 03:34:58.426131 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-20 03:34:58.426143 | orchestrator | 2026-02-20 03:34:58.426154 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-20 03:34:58.426166 | orchestrator | Friday 20 February 2026 03:34:55 +0000 (0:00:05.354) 0:03:13.179 ******* 2026-02-20 03:34:58.426181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:58.426237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:58.426338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 03:34:58.426361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:58.426380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-20 03:34:58.426400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-20 03:34:58.426421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:58.426456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:58.426477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-20 03:34:58.426508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:16.073997 | orchestrator | 2026-02-20 
03:36:16.074011 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-20 03:36:16.074112 | orchestrator | Friday 20 February 2026 03:34:59 +0000 (0:00:04.077) 0:03:17.257 ******* 2026-02-20 03:36:16.074133 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:16.074153 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:16.074172 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:16.074189 | orchestrator | 2026-02-20 03:36:16.074206 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-20 03:36:16.074218 | orchestrator | Friday 20 February 2026 03:34:59 +0000 (0:00:00.316) 0:03:17.573 ******* 2026-02-20 03:36:16.074228 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074267 | orchestrator | 2026-02-20 03:36:16.074281 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-20 03:36:16.074293 | orchestrator | Friday 20 February 2026 03:35:01 +0000 (0:00:02.047) 0:03:19.620 ******* 2026-02-20 03:36:16.074306 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074318 | orchestrator | 2026-02-20 03:36:16.074330 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-20 03:36:16.074343 | orchestrator | Friday 20 February 2026 03:35:03 +0000 (0:00:02.022) 0:03:21.643 ******* 2026-02-20 03:36:16.074354 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074365 | orchestrator | 2026-02-20 03:36:16.074376 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-20 03:36:16.074387 | orchestrator | Friday 20 February 2026 03:35:05 +0000 (0:00:02.173) 0:03:23.817 ******* 2026-02-20 03:36:16.074418 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074438 | orchestrator | 2026-02-20 03:36:16.074455 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-20 03:36:16.074473 | orchestrator | Friday 20 February 2026 03:35:07 +0000 (0:00:02.137) 0:03:25.954 ******* 2026-02-20 03:36:16.074491 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074509 | orchestrator | 2026-02-20 03:36:16.074528 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-20 03:36:16.074547 | orchestrator | Friday 20 February 2026 03:35:30 +0000 (0:00:22.914) 0:03:48.869 ******* 2026-02-20 03:36:16.074564 | orchestrator | 2026-02-20 03:36:16.074580 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-20 03:36:16.074604 | orchestrator | Friday 20 February 2026 03:35:30 +0000 (0:00:00.065) 0:03:48.934 ******* 2026-02-20 03:36:16.074615 | orchestrator | 2026-02-20 03:36:16.074626 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-20 03:36:16.074636 | orchestrator | Friday 20 February 2026 03:35:31 +0000 (0:00:00.066) 0:03:49.001 ******* 2026-02-20 03:36:16.074646 | orchestrator | 2026-02-20 03:36:16.074657 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-20 03:36:16.074668 | orchestrator | Friday 20 February 2026 03:35:31 +0000 (0:00:00.078) 0:03:49.080 ******* 2026-02-20 03:36:16.074678 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074689 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:36:16.074700 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:36:16.074710 | orchestrator | 2026-02-20 03:36:16.074721 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-20 03:36:16.074732 | orchestrator | Friday 20 February 2026 03:35:43 +0000 (0:00:12.618) 0:04:01.698 ******* 2026-02-20 03:36:16.074742 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:36:16.074753 | orchestrator | changed: 
[testbed-node-2] 2026-02-20 03:36:16.074763 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074774 | orchestrator | 2026-02-20 03:36:16.074784 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-20 03:36:16.074801 | orchestrator | Friday 20 February 2026 03:35:51 +0000 (0:00:07.810) 0:04:09.508 ******* 2026-02-20 03:36:16.074819 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:36:16.074838 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074856 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:36:16.074875 | orchestrator | 2026-02-20 03:36:16.074894 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-20 03:36:16.074914 | orchestrator | Friday 20 February 2026 03:36:02 +0000 (0:00:10.520) 0:04:20.029 ******* 2026-02-20 03:36:16.074932 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.074951 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:36:16.074962 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:36:16.074972 | orchestrator | 2026-02-20 03:36:16.074983 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-20 03:36:16.074994 | orchestrator | Friday 20 February 2026 03:36:07 +0000 (0:00:05.540) 0:04:25.569 ******* 2026-02-20 03:36:16.075004 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:36:16.075015 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:36:16.075025 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:36:16.075036 | orchestrator | 2026-02-20 03:36:16.075047 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:36:16.075059 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:36:16.075071 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-20 03:36:16.075090 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-20 03:36:16.075101 | orchestrator | 2026-02-20 03:36:16.075112 | orchestrator | 2026-02-20 03:36:16.075122 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:36:16.075133 | orchestrator | Friday 20 February 2026 03:36:16 +0000 (0:00:08.466) 0:04:34.036 ******* 2026-02-20 03:36:16.075144 | orchestrator | =============================================================================== 2026-02-20 03:36:16.075154 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.91s 2026-02-20 03:36:16.075172 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.42s 2026-02-20 03:36:16.075191 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.12s 2026-02-20 03:36:16.075209 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.11s 2026-02-20 03:36:16.075263 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.62s 2026-02-20 03:36:16.075286 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.52s 2026-02-20 03:36:16.075304 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.30s 2026-02-20 03:36:16.075323 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.47s 2026-02-20 03:36:16.075340 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s 2026-02-20 03:36:16.075356 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.81s 2026-02-20 03:36:16.075367 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.71s 2026-02-20 03:36:16.075378 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.02s 2026-02-20 03:36:16.075389 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.29s 2026-02-20 03:36:16.075399 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.54s 2026-02-20 03:36:16.075420 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.35s 2026-02-20 03:36:16.406981 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.14s 2026-02-20 03:36:16.407083 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.08s 2026-02-20 03:36:16.407096 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.01s 2026-02-20 03:36:16.407107 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.00s 2026-02-20 03:36:16.407118 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.98s 2026-02-20 03:36:18.681309 | orchestrator | 2026-02-20 03:36:18 | INFO  | Task 05336b21-9388-48cd-a42b-5b820b8e1441 (ceilometer) was prepared for execution. 2026-02-20 03:36:18.681382 | orchestrator | 2026-02-20 03:36:18 | INFO  | It takes a moment until task 05336b21-9388-48cd-a42b-5b820b8e1441 (ceilometer) has been started and output is visible here. 
2026-02-20 03:36:41.338860 | orchestrator | 2026-02-20 03:36:41.338964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:36:41.338975 | orchestrator | 2026-02-20 03:36:41.338982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:36:41.338988 | orchestrator | Friday 20 February 2026 03:36:22 +0000 (0:00:00.253) 0:00:00.253 ******* 2026-02-20 03:36:41.338994 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:36:41.339000 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:36:41.339006 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:36:41.339012 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:36:41.339017 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:36:41.339023 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:36:41.339028 | orchestrator | 2026-02-20 03:36:41.339034 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:36:41.339039 | orchestrator | Friday 20 February 2026 03:36:23 +0000 (0:00:00.674) 0:00:00.928 ******* 2026-02-20 03:36:41.339049 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339059 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339070 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339079 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339089 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339098 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-20 03:36:41.339107 | orchestrator | 2026-02-20 03:36:41.339116 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-20 03:36:41.339125 | orchestrator | 2026-02-20 03:36:41.339135 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-20 03:36:41.339144 | orchestrator | Friday 20 February 2026 03:36:23 +0000 (0:00:00.563) 0:00:01.491 ******* 2026-02-20 03:36:41.339179 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:36:41.339190 | orchestrator | 2026-02-20 03:36:41.339200 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-20 03:36:41.339209 | orchestrator | Friday 20 February 2026 03:36:25 +0000 (0:00:01.126) 0:00:02.618 ******* 2026-02-20 03:36:41.339219 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:41.339273 | orchestrator | 2026-02-20 03:36:41.339283 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-20 03:36:41.339292 | orchestrator | Friday 20 February 2026 03:36:25 +0000 (0:00:00.113) 0:00:02.731 ******* 2026-02-20 03:36:41.339313 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:41.339322 | orchestrator | 2026-02-20 03:36:41.339331 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-20 03:36:41.339340 | orchestrator | Friday 20 February 2026 03:36:25 +0000 (0:00:00.128) 0:00:02.860 ******* 2026-02-20 03:36:41.339349 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:36:41.339358 | orchestrator | 2026-02-20 03:36:41.339366 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-20 03:36:41.339375 | orchestrator | Friday 20 February 2026 03:36:28 +0000 (0:00:03.485) 0:00:06.345 ******* 2026-02-20 03:36:41.339385 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:36:41.339394 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-20 03:36:41.339403 | orchestrator | 
2026-02-20 03:36:41.339412 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-20 03:36:41.339422 | orchestrator | Friday 20 February 2026 03:36:32 +0000 (0:00:03.929) 0:00:10.275 ******* 2026-02-20 03:36:41.339430 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:36:41.339439 | orchestrator | 2026-02-20 03:36:41.339448 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-20 03:36:41.339457 | orchestrator | Friday 20 February 2026 03:36:35 +0000 (0:00:03.115) 0:00:13.390 ******* 2026-02-20 03:36:41.339466 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-20 03:36:41.339474 | orchestrator | 2026-02-20 03:36:41.339483 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-20 03:36:41.339493 | orchestrator | Friday 20 February 2026 03:36:39 +0000 (0:00:03.907) 0:00:17.298 ******* 2026-02-20 03:36:41.339501 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:41.339510 | orchestrator | 2026-02-20 03:36:41.339520 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-20 03:36:41.339529 | orchestrator | Friday 20 February 2026 03:36:39 +0000 (0:00:00.138) 0:00:17.436 ******* 2026-02-20 03:36:41.339542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:41.339572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:41.339589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:41.339600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:41.339613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:36:41.339623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:41.339633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:36:41.339648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:45.623837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:36:45.623967 | orchestrator | 2026-02-20 03:36:45.623984 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-20 03:36:45.623998 | orchestrator | Friday 20 February 2026 03:36:41 +0000 (0:00:01.408) 0:00:18.845 ******* 2026-02-20 03:36:45.624009 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-02-20 03:36:45.624021 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:36:45.624032 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-20 03:36:45.624043 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-20 03:36:45.624054 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-20 03:36:45.624064 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-20 03:36:45.624075 | orchestrator | 2026-02-20 03:36:45.624087 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-20 03:36:45.624099 | orchestrator | Friday 20 February 2026 03:36:42 +0000 (0:00:01.440) 0:00:20.285 ******* 2026-02-20 03:36:45.624110 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:36:45.624123 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:36:45.624133 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:36:45.624144 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:36:45.624155 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:36:45.624166 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:36:45.624177 | orchestrator | 2026-02-20 03:36:45.624188 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-20 03:36:45.624200 | orchestrator | Friday 20 February 2026 03:36:43 +0000 (0:00:00.567) 0:00:20.853 ******* 2026-02-20 03:36:45.624211 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:45.624222 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:45.624258 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:45.624269 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:45.624280 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:45.624290 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:45.624301 | orchestrator | 2026-02-20 03:36:45.624312 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-20 03:36:45.624325 | orchestrator | Friday 20 February 2026 03:36:44 +0000 (0:00:00.716) 0:00:21.569 ******* 2026-02-20 03:36:45.624336 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:36:45.624349 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:36:45.624361 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:36:45.624373 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:36:45.624386 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:36:45.624397 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:36:45.624410 | orchestrator | 2026-02-20 03:36:45.624422 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-20 03:36:45.624496 | orchestrator | Friday 20 February 2026 03:36:44 +0000 (0:00:00.580) 0:00:22.150 ******* 2026-02-20 03:36:45.624512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:45.624550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:45.624563 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:45.624597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:45.624610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:45.624622 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:45.624633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:45.624651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:45.624664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:45.624683 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:45.624694 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:45.624706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:45.624717 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:45.624736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:49.999961 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:50.000107 | orchestrator | 2026-02-20 03:36:50.000137 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-20 03:36:50.000159 | orchestrator | Friday 20 February 2026 03:36:45 +0000 (0:00:00.977) 0:00:23.128 ******* 2026-02-20 03:36:50.000178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:50.000279 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:50.000294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000327 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:50.000340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:50.000363 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 03:36:50.000396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000409 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:50.000421 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:50.000450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000483 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:50.000505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:50.000524 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:50.000543 | orchestrator | 2026-02-20 03:36:50.000564 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-20 03:36:50.000583 | orchestrator | Friday 20 February 2026 03:36:46 +0000 (0:00:00.918) 0:00:24.047 ******* 2026-02-20 03:36:50.000601 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:36:50.000619 | orchestrator | 2026-02-20 03:36:50.000638 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-20 03:36:50.000659 | orchestrator | Friday 20 February 2026 03:36:47 +0000 (0:00:00.650) 0:00:24.697 ******* 2026-02-20 03:36:50.000679 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:36:50.000701 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:36:50.000719 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:36:50.000738 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:36:50.000757 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:36:50.000777 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:36:50.000798 | orchestrator | 2026-02-20 03:36:50.000817 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-20 03:36:50.000837 | orchestrator | Friday 20 February 2026 03:36:47 +0000 (0:00:00.709) 
0:00:25.407 ******* 2026-02-20 03:36:50.000849 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:36:50.000860 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:36:50.000871 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:36:50.000882 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:36:50.000892 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:36:50.000903 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:36:50.000913 | orchestrator | 2026-02-20 03:36:50.000924 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-20 03:36:50.000935 | orchestrator | Friday 20 February 2026 03:36:48 +0000 (0:00:00.885) 0:00:26.292 ******* 2026-02-20 03:36:50.000946 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:50.000957 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:50.000967 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:50.000978 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:50.000989 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:50.001000 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:50.001011 | orchestrator | 2026-02-20 03:36:50.001022 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-20 03:36:50.001033 | orchestrator | Friday 20 February 2026 03:36:49 +0000 (0:00:00.696) 0:00:26.988 ******* 2026-02-20 03:36:50.001044 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:50.001055 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:50.001065 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:50.001076 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:50.001086 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:50.001097 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:50.001108 | orchestrator | 2026-02-20 03:36:54.522529 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-20 03:36:54.522641 | orchestrator | Friday 20 February 2026 03:36:49 +0000 (0:00:00.528) 0:00:27.517 ******* 2026-02-20 03:36:54.522691 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:36:54.522701 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-20 03:36:54.522708 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-20 03:36:54.522715 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-20 03:36:54.522723 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-20 03:36:54.522730 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-20 03:36:54.522737 | orchestrator | 2026-02-20 03:36:54.522745 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-20 03:36:54.522753 | orchestrator | Friday 20 February 2026 03:36:51 +0000 (0:00:01.408) 0:00:28.926 ******* 2026-02-20 03:36:54.522776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:54.522797 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:54.522843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:54.522870 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:54.522883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:54.522941 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:54.522949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522958 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 03:36:54.522970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522978 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:54.522985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:54.522993 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:36:54.523002 | orchestrator | 2026-02-20 03:36:54.523015 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-20 03:36:54.523026 | orchestrator | Friday 20 February 2026 03:36:52 +0000 (0:00:00.752) 0:00:29.679 ******* 2026-02-20 03:36:54.523038 | orchestrator | 
skipping: [testbed-node-0]
2026-02-20 03:36:54.523052 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:36:54.523065 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:36:54.523077 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:36:54.523091 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:36:54.523104 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:36:54.523116 | orchestrator |
2026-02-20 03:36:54.523127 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-20 03:36:54.523135 | orchestrator | Friday 20 February 2026 03:36:52 +0000 (0:00:00.691) 0:00:30.371 *******
2026-02-20 03:36:54.523150 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:36:54.523158 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 03:36:54.523167 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 03:36:54.523175 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:36:54.523183 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:36:54.523191 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:36:54.523199 | orchestrator |
2026-02-20 03:36:54.523208 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-20 03:36:54.523216 | orchestrator | Friday 20 February 2026 03:36:54 +0000 (0:00:01.271) 0:00:31.642 *******
2026-02-20 03:36:54.523268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:59.961291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:59.961429 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:36:59.961464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:59.962386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:59.962442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:36:59.962476 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:36:59.962485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:36:59.962493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:59.962500 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:36:59.962507 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:36:59.962565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:36:59.962574 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:36:59.962588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:36:59.962595 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:36:59.962602 | orchestrator |
2026-02-20 03:36:59.962609 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-20 03:36:59.962616 | orchestrator | Friday 20 February 2026 03:36:55 +0000 (0:00:00.994) 0:00:32.637 *******
2026-02-20 03:36:59.962622 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:36:59.962629 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:36:59.962635 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:36:59.962641 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:36:59.962647 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:36:59.962653 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:36:59.962659 | orchestrator |
2026-02-20 03:36:59.962666 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-20 03:36:59.962680 | orchestrator | Friday 20 February 2026 03:36:55 +0000 (0:00:00.712) 0:00:33.350 *******
2026-02-20 03:36:59.962687 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:36:59.962693 | orchestrator |
2026-02-20 03:36:59.962699 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-20 03:36:59.962705 | orchestrator | Friday 20 February 2026 03:36:55 +0000 (0:00:00.138) 0:00:33.488 *******
2026-02-20 03:36:59.962712 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:36:59.962718 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:36:59.962724 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:36:59.962730 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:36:59.962736 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:36:59.962742 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:36:59.962749 | orchestrator |
2026-02-20
03:36:59.962755 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-20 03:36:59.962761 | orchestrator | Friday 20 February 2026 03:36:56 +0000 (0:00:00.561) 0:00:34.049 ******* 2026-02-20 03:36:59.962768 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:36:59.962776 | orchestrator | 2026-02-20 03:36:59.962782 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-20 03:36:59.962788 | orchestrator | Friday 20 February 2026 03:36:57 +0000 (0:00:01.210) 0:00:35.260 ******* 2026-02-20 03:36:59.962795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:36:59.962809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:37:00.452081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:37:00.452200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:37:00.452316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:37:00.452333 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:37:00.452346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:37:00.452359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:37:00.452390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:37:00.452403 | orchestrator | 2026-02-20 03:37:00.452416 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-20 03:37:00.452428 | orchestrator | Friday 20 February 2026 03:36:59 +0000 (0:00:02.213) 0:00:37.473 ******* 2026-02-20 03:37:00.452448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:00.452468 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:37:00.452480 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:37:00.452492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:00.452504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-02-20 03:37:00.452515 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:37:00.452527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:00.452547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:37:02.154339 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:37:02.154465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154507 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:37:02.154520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154533 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:37:02.154546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154558 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
03:37:02.154570 | orchestrator | 2026-02-20 03:37:02.154583 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-20 03:37:02.154596 | orchestrator | Friday 20 February 2026 03:37:00 +0000 (0:00:00.793) 0:00:38.267 ******* 2026-02-20 03:37:02.154609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:37:02.154652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:37:02.154690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 03:37:02.154712 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:37:02.154725 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:37:02.154736 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:37:02.154748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:02.154759 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:37:02.154769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:02.154785 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:02.154809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.285617 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:09.285779 | orchestrator |
2026-02-20 03:37:09.285808 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-20 03:37:09.285831 | orchestrator | Friday 20 February 2026 03:37:02 +0000 (0:00:01.396) 0:00:39.664 *******
2026-02-20 03:37:09.285854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.285879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.285899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.285922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.285944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.286100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.286129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:09.286150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:09.286170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:09.286189 | orchestrator |
2026-02-20 03:37:09.286208 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-20 03:37:09.286257 | orchestrator | Friday 20 February 2026 03:37:04 +0000 (0:00:02.492) 0:00:42.156 *******
2026-02-20 03:37:09.286278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.286312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:09.286354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.361484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.361641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.361671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.361693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:18.361735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:18.361762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:18.361773 | orchestrator |
2026-02-20 03:37:18.361785 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-02-20 03:37:18.361813 | orchestrator | Friday 20 February 2026 03:37:09 +0000 (0:00:04.644) 0:00:46.800 *******
2026-02-20 03:37:18.361824 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:37:18.361835 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 03:37:18.361844 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 03:37:18.361854 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:37:18.361864 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:37:18.361873 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:37:18.361890 | orchestrator |
2026-02-20 03:37:18.361906 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-02-20 03:37:18.361923 | orchestrator | Friday 20 February 2026 03:37:10 +0000 (0:00:01.427) 0:00:48.228 *******
2026-02-20 03:37:18.361939 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:37:18.361956 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:37:18.361972 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:37:18.361986 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:18.361996 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:18.362007 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:18.362077 | orchestrator |
2026-02-20 03:37:18.362089 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-02-20 03:37:18.362101 | orchestrator | Friday 20 February 2026 03:37:11 +0000 (0:00:00.564) 0:00:48.792 *******
2026-02-20 03:37:18.362112 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:18.362123 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:18.362139 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:18.362157 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:37:18.362176 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:37:18.362194 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:37:18.362266 | orchestrator |
2026-02-20 03:37:18.362278 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-02-20 03:37:18.362289 | orchestrator | Friday 20 February 2026 03:37:12 +0000 (0:00:01.601) 0:00:50.394 *******
2026-02-20 03:37:18.362306 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:18.362324 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:18.362341 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:18.362378 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:37:18.362397 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:37:18.362414 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:37:18.362432 | orchestrator |
2026-02-20 03:37:18.362450 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-02-20 03:37:18.362466 | orchestrator | Friday 20 February 2026 03:37:14 +0000 (0:00:01.497) 0:00:51.809 *******
2026-02-20 03:37:18.362476 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:37:18.362485 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 03:37:18.362495 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 03:37:18.362504 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:37:18.362514 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:37:18.362523 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:37:18.362533 | orchestrator |
2026-02-20 03:37:18.362542 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-02-20 03:37:18.362552 | orchestrator | Friday 20 February 2026 03:37:15 +0000 (0:00:01.497) 0:00:53.306 *******
2026-02-20 03:37:18.362563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.362575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.362593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:18.362615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175452 | orchestrator |
2026-02-20 03:37:19.175465 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-02-20 03:37:19.175478 | orchestrator | Friday 20 February 2026 03:37:18 +0000 (0:00:02.563) 0:00:55.870 *******
2026-02-20 03:37:19.175490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175540 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:37:19.175553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175576 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:37:19.175588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:19.175611 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:37:19.175628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:19.175646 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:19.175664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.446500 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:22.446643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.446676 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:22.446697 | orchestrator |
2026-02-20 03:37:22.446718 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-02-20 03:37:22.446741 | orchestrator | Friday 20 February 2026 03:37:19 +0000 (0:00:00.817) 0:00:56.688 *******
2026-02-20 03:37:22.446761 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:37:22.446781 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:37:22.446793 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:37:22.446804 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:22.446815 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:37:22.446826 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:37:22.446837 | orchestrator |
2026-02-20 03:37:22.446848 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-02-20 03:37:22.446859 | orchestrator | Friday 20 February 2026 03:37:19 +0000 (0:00:00.712) 0:00:57.400 *******
2026-02-20 03:37:22.446872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.446886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:22.446938 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:37:22.446951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.446963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:22.446975 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:37:22.447006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.447019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-20 03:37:22.447031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-20 03:37:22.447042 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:37:22.447053 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:37:22.447070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:22.447091 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:37:22.447107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-20 03:37:22.447125 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:37:22.447142 | orchestrator | 2026-02-20 03:37:22.447161 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-20 03:37:22.447181 | orchestrator | Friday 20 February 2026 03:37:20 +0000 (0:00:00.822) 0:00:58.223 ******* 2026-02-20 03:37:22.447243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.529915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.530098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.530119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.530175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.530282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:00.530295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:38:00.530327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:38:00.530340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-20 03:38:00.530352 | orchestrator | 2026-02-20 03:38:00.530365 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-20 03:38:00.530378 | orchestrator | Friday 20 February 2026 03:37:22 +0000 (0:00:01.736) 0:00:59.960 ******* 2026-02-20 
03:38:00.530389 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:38:00.530400 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:38:00.530411 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:38:00.530431 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:38:00.530442 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:38:00.530452 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:38:00.530463 | orchestrator | 2026-02-20 03:38:00.530474 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-20 03:38:00.530485 | orchestrator | Friday 20 February 2026 03:37:22 +0000 (0:00:00.560) 0:01:00.520 ******* 2026-02-20 03:38:00.530496 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:38:00.530507 | orchestrator | 2026-02-20 03:38:00.530518 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530528 | orchestrator | Friday 20 February 2026 03:37:27 +0000 (0:00:04.589) 0:01:05.109 ******* 2026-02-20 03:38:00.530539 | orchestrator | 2026-02-20 03:38:00.530550 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530561 | orchestrator | Friday 20 February 2026 03:37:27 +0000 (0:00:00.071) 0:01:05.181 ******* 2026-02-20 03:38:00.530572 | orchestrator | 2026-02-20 03:38:00.530589 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530607 | orchestrator | Friday 20 February 2026 03:37:27 +0000 (0:00:00.069) 0:01:05.250 ******* 2026-02-20 03:38:00.530619 | orchestrator | 2026-02-20 03:38:00.530629 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530640 | orchestrator | Friday 20 February 2026 03:37:27 +0000 (0:00:00.230) 0:01:05.481 ******* 2026-02-20 03:38:00.530651 | orchestrator | 2026-02-20 03:38:00.530667 | 
orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530679 | orchestrator | Friday 20 February 2026 03:37:28 +0000 (0:00:00.069) 0:01:05.551 ******* 2026-02-20 03:38:00.530690 | orchestrator | 2026-02-20 03:38:00.530700 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-20 03:38:00.530711 | orchestrator | Friday 20 February 2026 03:37:28 +0000 (0:00:00.066) 0:01:05.617 ******* 2026-02-20 03:38:00.530722 | orchestrator | 2026-02-20 03:38:00.530733 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-20 03:38:00.530743 | orchestrator | Friday 20 February 2026 03:37:28 +0000 (0:00:00.070) 0:01:05.687 ******* 2026-02-20 03:38:00.530754 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:38:00.530765 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:38:00.530775 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:38:00.530786 | orchestrator | 2026-02-20 03:38:00.530797 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-20 03:38:00.530807 | orchestrator | Friday 20 February 2026 03:37:39 +0000 (0:00:11.220) 0:01:16.908 ******* 2026-02-20 03:38:00.530818 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:38:00.530829 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:38:00.530839 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:38:00.530850 | orchestrator | 2026-02-20 03:38:00.530861 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-20 03:38:00.530871 | orchestrator | Friday 20 February 2026 03:37:49 +0000 (0:00:09.857) 0:01:26.765 ******* 2026-02-20 03:38:00.530882 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:38:00.530893 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:38:00.530904 | orchestrator | changed: [testbed-node-4] 2026-02-20 
03:38:00.530914 | orchestrator | 2026-02-20 03:38:00.530925 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:38:00.530937 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-20 03:38:00.530950 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 03:38:00.530969 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 03:38:00.907364 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-20 03:38:00.907531 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-20 03:38:00.907557 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-20 03:38:00.907570 | orchestrator | 2026-02-20 03:38:00.907581 | orchestrator | 2026-02-20 03:38:00.907593 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:38:00.907624 | orchestrator | Friday 20 February 2026 03:38:00 +0000 (0:00:11.268) 0:01:38.034 ******* 2026-02-20 03:38:00.907647 | orchestrator | =============================================================================== 2026-02-20 03:38:00.907658 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.27s 2026-02-20 03:38:00.907669 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 11.22s 2026-02-20 03:38:00.907680 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.86s 2026-02-20 03:38:00.907690 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.64s 2026-02-20 03:38:00.907701 | orchestrator | ceilometer : Running Ceilometer 
bootstrap container --------------------- 4.59s 2026-02-20 03:38:00.907712 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.93s 2026-02-20 03:38:00.907723 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.91s 2026-02-20 03:38:00.907734 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.49s 2026-02-20 03:38:00.907744 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.12s 2026-02-20 03:38:00.907755 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.56s 2026-02-20 03:38:00.907766 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.49s 2026-02-20 03:38:00.907776 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.21s 2026-02-20 03:38:00.907787 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.74s 2026-02-20 03:38:00.907798 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.60s 2026-02-20 03:38:00.907810 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.50s 2026-02-20 03:38:00.907821 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.44s 2026-02-20 03:38:00.907832 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.43s 2026-02-20 03:38:00.907842 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.42s 2026-02-20 03:38:00.907853 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.41s 2026-02-20 03:38:00.907864 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.41s 2026-02-20 03:38:03.124257 | orchestrator | 2026-02-20 03:38:03 | INFO  | Task 
4a9f8f6e-beb6-43b4-be91-8980f1d768c9 (aodh) was prepared for execution. 2026-02-20 03:38:03.124361 | orchestrator | 2026-02-20 03:38:03 | INFO  | It takes a moment until task 4a9f8f6e-beb6-43b4-be91-8980f1d768c9 (aodh) has been started and output is visible here. 2026-02-20 03:38:34.142147 | orchestrator | 2026-02-20 03:38:34.142308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:38:34.142328 | orchestrator | 2026-02-20 03:38:34.142341 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:38:34.142352 | orchestrator | Friday 20 February 2026 03:38:07 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-02-20 03:38:34.142364 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:38:34.142376 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:38:34.142387 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:38:34.142398 | orchestrator | 2026-02-20 03:38:34.142446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:38:34.142459 | orchestrator | Friday 20 February 2026 03:38:07 +0000 (0:00:00.315) 0:00:00.566 ******* 2026-02-20 03:38:34.142470 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-20 03:38:34.142481 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-20 03:38:34.142491 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-20 03:38:34.142502 | orchestrator | 2026-02-20 03:38:34.142516 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-20 03:38:34.142536 | orchestrator | 2026-02-20 03:38:34.142553 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-20 03:38:34.142571 | orchestrator | Friday 20 February 2026 03:38:07 +0000 (0:00:00.406) 0:00:00.972 ******* 2026-02-20 03:38:34.142592 | orchestrator | included: 
/ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:38:34.142615 | orchestrator | 2026-02-20 03:38:34.142637 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-20 03:38:34.142656 | orchestrator | Friday 20 February 2026 03:38:08 +0000 (0:00:00.520) 0:00:01.492 ******* 2026-02-20 03:38:34.142675 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-20 03:38:34.142698 | orchestrator | 2026-02-20 03:38:34.142718 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-20 03:38:34.142738 | orchestrator | Friday 20 February 2026 03:38:11 +0000 (0:00:03.269) 0:00:04.762 ******* 2026-02-20 03:38:34.142751 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-20 03:38:34.142764 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-20 03:38:34.142776 | orchestrator | 2026-02-20 03:38:34.142788 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-20 03:38:34.142799 | orchestrator | Friday 20 February 2026 03:38:17 +0000 (0:00:06.345) 0:00:11.107 ******* 2026-02-20 03:38:34.142811 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:38:34.142824 | orchestrator | 2026-02-20 03:38:34.142835 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-20 03:38:34.142847 | orchestrator | Friday 20 February 2026 03:38:21 +0000 (0:00:03.328) 0:00:14.436 ******* 2026-02-20 03:38:34.142859 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:38:34.142871 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-20 03:38:34.142884 | orchestrator | 2026-02-20 03:38:34.142896 | orchestrator | TASK [service-ks-register : 
aodh | Creating roles] ***************************** 2026-02-20 03:38:34.142908 | orchestrator | Friday 20 February 2026 03:38:25 +0000 (0:00:03.784) 0:00:18.220 ******* 2026-02-20 03:38:34.142921 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-20 03:38:34.142932 | orchestrator | 2026-02-20 03:38:34.142944 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-20 03:38:34.142956 | orchestrator | Friday 20 February 2026 03:38:28 +0000 (0:00:03.170) 0:00:21.391 ******* 2026-02-20 03:38:34.142968 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-20 03:38:34.142979 | orchestrator | 2026-02-20 03:38:34.142989 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-20 03:38:34.142999 | orchestrator | Friday 20 February 2026 03:38:32 +0000 (0:00:03.822) 0:00:25.213 ******* 2026-02-20 03:38:34.143014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:34.143075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:34.143090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:34.143102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:34.143115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:34.143126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:34.143138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:34.143266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:35.310851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:35.310940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-02-20 03:38:35.310952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:35.310961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:35.310974 | orchestrator | 2026-02-20 03:38:35.310989 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-20 03:38:35.311004 | orchestrator | Friday 20 February 2026 03:38:34 +0000 (0:00:02.047) 0:00:27.261 ******* 2026-02-20 03:38:35.311020 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:38:35.311034 | orchestrator | 2026-02-20 03:38:35.311044 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-20 03:38:35.311072 | orchestrator | Friday 20 February 2026 03:38:34 +0000 (0:00:00.129) 0:00:27.390 ******* 2026-02-20 03:38:35.311081 | orchestrator | skipping: [testbed-node-0] 2026-02-20 
03:38:35.311088 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:38:35.311096 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:38:35.311104 | orchestrator | 2026-02-20 03:38:35.311112 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-20 03:38:35.311119 | orchestrator | Friday 20 February 2026 03:38:34 +0000 (0:00:00.450) 0:00:27.841 ******* 2026-02-20 03:38:35.311142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:35.311248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:35.311262 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:35.311271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:35.311279 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:38:35.311288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:35.311303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:35.311312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:35.311334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.205701 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:38:40.205798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:40.205813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:40.205825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.205853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.205863 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:38:40.205873 | orchestrator | 2026-02-20 03:38:40.205882 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-20 03:38:40.205893 | orchestrator | Friday 20 February 2026 03:38:35 +0000 (0:00:00.587) 0:00:28.428 ******* 2026-02-20 03:38:40.205902 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:38:40.205911 | orchestrator | 2026-02-20 03:38:40.205920 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-20 03:38:40.205929 | orchestrator | Friday 20 February 2026 03:38:35 +0000 (0:00:00.662) 0:00:29.091 ******* 2026-02-20 03:38:40.205950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:40.205975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:40.205985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-20 03:38:40.206001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:40.206010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:40.206070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-20 03:38:40.206085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.206102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.814659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.814786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.814848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.814871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-20 03:38:40.814890 | orchestrator | 2026-02-20 03:38:40.814946 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-20 03:38:40.814973 | orchestrator | Friday 20 February 2026 03:38:40 +0000 (0:00:04.237) 0:00:33.329 ******* 2026-02-20 03:38:40.815026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:40.815050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:40.815096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.815119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.815155 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:38:40.815206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:40.815228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:40.815258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.815279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:40.815299 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:38:40.815335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:41.798267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:41.798381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798421 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:38:41.798438 | orchestrator | 2026-02-20 03:38:41.798451 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-20 03:38:41.798464 | orchestrator | Friday 20 February 2026 03:38:40 +0000 (0:00:00.605) 0:00:33.935 ******* 2026-02-20 03:38:41.798497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:41.798514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:41.798529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798618 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:38:41.798633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:41.798648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:41.798669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:41.798699 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:38:41.798723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-02-20 03:38:45.981087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 03:38:45.981222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 03:38:45.981237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 03:38:45.981248 | orchestrator | skipping: [testbed-node-2] 
2026-02-20 03:38:45.981258 | orchestrator |
2026-02-20 03:38:45.981268 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-02-20 03:38:45.981278 | orchestrator | Friday 20 February 2026 03:38:41 +0000 (0:00:00.982) 0:00:34.918 *******
2026-02-20 03:38:45.981302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:45.981314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:45.981358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:45.981368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:45.981378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:45.981387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:45.981400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:45.981410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:45.981426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:45.981442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.289767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.289881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.289897 | orchestrator |
2026-02-20 03:38:54.289911 | orchestrator | TASK [aodh : Copying over aodh.conf] *******************************************
2026-02-20 03:38:54.289924 | orchestrator | Friday 20 February 2026 03:38:45 +0000 (0:00:04.184) 0:00:39.103 *******
2026-02-20 03:38:54.289952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:54.289986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:54.289998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:54.290092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:54.290107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:54.290118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:54.290145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.290203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.290224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.290244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:54.290266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.422469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.422586 | orchestrator |
2026-02-20 03:38:59.422611 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-02-20 03:38:59.422631 | orchestrator | Friday 20 February 2026 03:38:54 +0000 (0:00:08.305) 0:00:47.408 *******
2026-02-20 03:38:59.422647 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:38:59.422667 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:38:59.422685 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:38:59.422705 | orchestrator |
2026-02-20 03:38:59.422747 | orchestrator | TASK [aodh : Check aodh containers] ********************************************
2026-02-20 03:38:59.422772 | orchestrator | Friday 20 February 2026 03:38:56 +0000 (0:00:01.776) 0:00:49.185 *******
2026-02-20 03:38:59.422803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:59.422841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:59.422854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-20 03:38:59.422887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:59.422900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:59.422912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-20 03:38:59.422971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.422985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.422999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.423011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:38:59.423032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:39:55.177614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-20 03:39:55.177730 | orchestrator |
2026-02-20 03:39:55.177784 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-20 03:39:55.177798 | orchestrator | Friday 20 February 2026 03:38:59 +0000 (0:00:03.359) 0:00:52.545 *******
2026-02-20 03:39:55.177810 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:39:55.177855 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:39:55.177867 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:39:55.177877 | orchestrator |
2026-02-20 03:39:55.177888 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-02-20 03:39:55.177900 | orchestrator | Friday 20 February 2026 03:38:59 +0000 (0:00:00.303) 0:00:52.848 *******
2026-02-20 03:39:55.177910 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.177921 | orchestrator |
2026-02-20 03:39:55.177945 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-02-20 03:39:55.177956 | orchestrator | Friday 20 February 2026 03:39:01 +0000 (0:00:02.124) 0:00:54.973 *******
2026-02-20 03:39:55.177967 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.177978 | orchestrator |
2026-02-20 03:39:55.178075 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-02-20 03:39:55.178090 | orchestrator | Friday 20 February 2026 03:39:04 +0000 (0:00:02.206) 0:00:57.180 *******
2026-02-20 03:39:55.178101 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.178114 | orchestrator |
2026-02-20 03:39:55.178127 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-20 03:39:55.178140 | orchestrator | Friday 20 February 2026 03:39:16 +0000 (0:00:12.471) 0:01:09.651 *******
2026-02-20 03:39:55.178186 | orchestrator |
2026-02-20 03:39:55.178199 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-20 03:39:55.178212 | orchestrator | Friday 20 February 2026 03:39:16 +0000 (0:00:00.084) 0:01:09.735 *******
2026-02-20 03:39:55.178225 | orchestrator |
2026-02-20 03:39:55.178237 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-20 03:39:55.178250 | orchestrator | Friday 20 February 2026 03:39:16 +0000 (0:00:00.100) 0:01:09.836 *******
2026-02-20 03:39:55.178262 | orchestrator |
2026-02-20 03:39:55.178274 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-02-20 03:39:55.178287 | orchestrator | Friday 20 February 2026 03:39:16 +0000 (0:00:00.258) 0:01:10.094 *******
2026-02-20 03:39:55.178299 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.178312 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:39:55.178324 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:39:55.178336 | orchestrator |
2026-02-20 03:39:55.178349 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-02-20 03:39:55.178361 | orchestrator | Friday 20 February 2026 03:39:28 +0000 (0:00:11.294) 0:01:21.389 *******
2026-02-20 03:39:55.178374 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.178387 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:39:55.178400 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:39:55.178410 | orchestrator |
2026-02-20 03:39:55.178421 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-20 03:39:55.178432 | orchestrator | Friday 20 February 2026 03:39:38 +0000 (0:00:10.647) 0:01:32.037 *******
2026-02-20 03:39:55.178443 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.178454 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:39:55.178465 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:39:55.178475 | orchestrator |
2026-02-20 03:39:55.178486 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-20 03:39:55.178497 | orchestrator | Friday 20 February 2026 03:39:49 +0000 (0:00:10.496) 0:01:42.533 *******
2026-02-20 03:39:55.178508 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:39:55.178519 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:39:55.178529 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:39:55.178540 | orchestrator |
2026-02-20 03:39:55.178551 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:39:55.178564 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 03:39:55.178589 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 03:39:55.178600 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 03:39:55.178611 | orchestrator |
2026-02-20 03:39:55.178622 | orchestrator |
2026-02-20 03:39:55.178633 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:39:55.178644 | orchestrator | Friday 20 February 2026 03:39:54 +0000 (0:00:05.467) 0:01:48.000 *******
2026-02-20 03:39:55.178655 | orchestrator | ===============================================================================
2026-02-20 03:39:55.178666 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.47s
2026-02-20 03:39:55.178677 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 11.29s
2026-02-20 03:39:55.178705 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.65s
2026-02-20 03:39:55.178716 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.50s
2026-02-20 03:39:55.178727 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.31s
2026-02-20 03:39:55.178738 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.35s
2026-02-20 03:39:55.178749 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.47s
2026-02-20 03:39:55.178760 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.24s
2026-02-20 03:39:55.178771 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.18s
2026-02-20 03:39:55.178781 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.82s
2026-02-20 03:39:55.178792 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.78s
2026-02-20 03:39:55.178803 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.36s
2026-02-20 03:39:55.178814 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.33s
2026-02-20 03:39:55.178824 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.27s
2026-02-20 03:39:55.178836 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.17s
2026-02-20 03:39:55.178860 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.21s
2026-02-20 03:39:55.178872 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.12s
2026-02-20 03:39:55.178882 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.05s
2026-02-20 03:39:55.178893 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.78s
2026-02-20 03:39:55.178910 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.98s
2026-02-20 03:39:57.465408 | orchestrator | 2026-02-20 03:39:57 | INFO  | Task a9e72a06-436d-4078-8b88-ca6fff66456e (kolla-ceph-rgw) was prepared for execution.
2026-02-20 03:39:57.465511 | orchestrator | 2026-02-20 03:39:57 | INFO  | It takes a moment until task a9e72a06-436d-4078-8b88-ca6fff66456e (kolla-ceph-rgw) has been started and output is visible here.
2026-02-20 03:40:31.355009 | orchestrator |
2026-02-20 03:40:31.355128 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:40:31.355144 | orchestrator |
2026-02-20 03:40:31.355201 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:40:31.355213 | orchestrator | Friday 20 February 2026 03:40:01 +0000 (0:00:00.281) 0:00:00.281 *******
2026-02-20 03:40:31.355225 | orchestrator | ok: [testbed-manager]
2026-02-20 03:40:31.355237 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:40:31.355248 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:40:31.355259 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:40:31.355295 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:40:31.355307 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:40:31.355318 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:40:31.355328 | orchestrator |
2026-02-20 03:40:31.355339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:40:31.355350 | orchestrator | Friday 20 February 2026 03:40:02 +0000 (0:00:00.824) 0:00:01.106 *******
2026-02-20 03:40:31.355361 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355372 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355384 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355394 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355405 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355415 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355426 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-20 03:40:31.355437 | orchestrator |
2026-02-20 03:40:31.355447 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-20 03:40:31.355460 | orchestrator |
2026-02-20 03:40:31.355471 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-20 03:40:31.355483 | orchestrator | Friday 20 February 2026 03:40:03 +0000 (0:00:00.696) 0:00:01.803 *******
2026-02-20 03:40:31.355494 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:40:31.355506 | orchestrator |
2026-02-20 03:40:31.355517 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-20 03:40:31.355528 | orchestrator | Friday 20 February 2026 03:40:04 +0000 (0:00:01.463) 0:00:03.266 *******
2026-02-20 03:40:31.355539 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-20 03:40:31.355552 | orchestrator |
2026-02-20 03:40:31.355564 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-20 03:40:31.355576 | orchestrator | Friday 20 February 2026 03:40:08 +0000 (0:00:03.730) 0:00:06.997 *******
2026-02-20 03:40:31.355589 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-20 03:40:31.355603 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-20 03:40:31.355615 | orchestrator |
2026-02-20 03:40:31.355628 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-20 03:40:31.355640 | orchestrator | Friday 20 February 2026 03:40:14 +0000 (0:00:05.926) 0:00:12.923 *******
2026-02-20 03:40:31.355652 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-20 03:40:31.355664 | orchestrator |
2026-02-20 03:40:31.355677 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-20 03:40:31.355690 | orchestrator | Friday 20 February 2026 03:40:17 +0000 (0:00:02.985) 0:00:15.909 *******
2026-02-20 03:40:31.355701 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-20 03:40:31.355714 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-20 03:40:31.355726 | orchestrator |
2026-02-20 03:40:31.355739 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-20 03:40:31.355752 | orchestrator | Friday 20 February 2026 03:40:20 +0000 (0:00:03.579) 0:00:19.488 *******
2026-02-20 03:40:31.355764 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-20 03:40:31.355777 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-20 03:40:31.355790 | orchestrator |
2026-02-20 03:40:31.355802 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-20 03:40:31.355814 | orchestrator | Friday 20 February 2026 03:40:26 +0000 (0:00:05.688) 0:00:25.177 *******
2026-02-20 03:40:31.355827 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-20 03:40:31.355846 | orchestrator |
2026-02-20 03:40:31.355857 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:40:31.355869 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.355883 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.355901 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.355938 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.355958 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.355999 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.356018 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:31.356035 | orchestrator |
2026-02-20 03:40:31.356052 | orchestrator |
2026-02-20 03:40:31.356071 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:40:31.356090 | orchestrator | Friday 20 February 2026 03:40:30 +0000 (0:00:04.495) 0:00:29.673 *******
2026-02-20 03:40:31.356110 | orchestrator | ===============================================================================
2026-02-20 03:40:31.356128 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.93s
2026-02-20 03:40:31.356145 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.69s
2026-02-20 03:40:31.356199 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.50s
2026-02-20 03:40:31.356218 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.73s
2026-02-20 03:40:31.356236 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.58s
2026-02-20 03:40:31.356255 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.99s
2026-02-20 03:40:31.356274 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.46s
2026-02-20 03:40:31.356292 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2026-02-20 03:40:31.356310 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-02-20 03:40:33.589553 | orchestrator | 2026-02-20 03:40:33 | INFO  | Task 54def079-b09d-45be-a5ee-e4fe3f3b3559 (gnocchi) was prepared for execution.
2026-02-20 03:40:33.589682 | orchestrator | 2026-02-20 03:40:33 | INFO  | It takes a moment until task 54def079-b09d-45be-a5ee-e4fe3f3b3559 (gnocchi) has been started and output is visible here.
2026-02-20 03:40:37.764617 | orchestrator |
2026-02-20 03:40:37.764720 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:40:37.764735 | orchestrator |
2026-02-20 03:40:37.764746 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:40:37.764756 | orchestrator | Friday 20 February 2026 03:40:37 +0000 (0:00:00.188) 0:00:00.188 *******
2026-02-20 03:40:37.764767 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:40:37.764777 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:40:37.764787 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:40:37.764797 | orchestrator |
2026-02-20 03:40:37.764807 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:40:37.764817 | orchestrator | Friday 20 February 2026 03:40:37 +0000 (0:00:00.229) 0:00:00.417 *******
2026-02-20 03:40:37.764827 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-20 03:40:37.764838 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-20 03:40:37.764872 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-20 03:40:37.764883 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-20 03:40:37.764893 | orchestrator |
2026-02-20 03:40:37.764903 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-20 03:40:37.764912 | orchestrator | skipping: no hosts matched
2026-02-20 03:40:37.764923 | orchestrator |
2026-02-20 03:40:37.764933 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:40:37.764943 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:37.764955 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:37.764964 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:40:37.764974 | orchestrator |
2026-02-20 03:40:37.764984 | orchestrator |
2026-02-20 03:40:37.764993 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:40:37.765003 | orchestrator | Friday 20 February 2026 03:40:37 +0000 (0:00:00.243) 0:00:00.661 *******
2026-02-20 03:40:37.765013 | orchestrator | ===============================================================================
2026-02-20 03:40:37.765023 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.24s
2026-02-20 03:40:37.765033 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s
2026-02-20 03:40:39.645824 | orchestrator | 2026-02-20 03:40:39 | INFO  | Task 4c6e72d6-a35e-47d9-8d7a-f2031bf584fa (manila) was prepared for execution.
2026-02-20 03:40:39.645924 | orchestrator | 2026-02-20 03:40:39 | INFO  | It takes a moment until task 4c6e72d6-a35e-47d9-8d7a-f2031bf584fa (manila) has been started and output is visible here.
2026-02-20 03:41:19.006682 | orchestrator |
2026-02-20 03:41:19.006827 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:41:19.006853 | orchestrator |
2026-02-20 03:41:19.006893 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:41:19.006915 | orchestrator | Friday 20 February 2026 03:40:42 +0000 (0:00:00.187) 0:00:00.187 *******
2026-02-20 03:41:19.006935 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:41:19.006955 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:41:19.006973 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:41:19.006990 | orchestrator |
2026-02-20 03:41:19.007009 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:41:19.007028 | orchestrator | Friday 20 February 2026 03:40:43 +0000 (0:00:00.278) 0:00:00.466 *******
2026-02-20 03:41:19.007047 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-20 03:41:19.007067 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-20 03:41:19.007086 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-20 03:41:19.007104 | orchestrator |
2026-02-20 03:41:19.007122 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-20 03:41:19.007141 | orchestrator |
2026-02-20 03:41:19.007219 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-20 03:41:19.007243 | orchestrator | Friday 20 February 2026 03:40:43 +0000 (0:00:00.286) 0:00:00.753 *******
2026-02-20 03:41:19.007266 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:41:19.007289 | orchestrator |
2026-02-20 03:41:19.007311 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-20 03:41:19.007334 | orchestrator | Friday 20 February 2026 03:40:43 +0000 (0:00:00.413) 0:00:01.166 *******
2026-02-20 03:41:19.007356 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:41:19.007379 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:41:19.007436 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:41:19.007458 | orchestrator |
2026-02-20 03:41:19.007479 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-20 03:41:19.007497 | orchestrator | Friday 20 February 2026 03:40:44 +0000 (0:00:00.312) 0:00:01.479 *******
2026-02-20 03:41:19.007516 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-20 03:41:19.007537 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-20 03:41:19.007557 | orchestrator |
2026-02-20 03:41:19.007577 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-20 03:41:19.007597 | orchestrator | Friday 20 February 2026 03:40:50 +0000 (0:00:06.291) 0:00:07.770 *******
2026-02-20 03:41:19.007617 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-20 03:41:19.007638 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-20 03:41:19.007659 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-20 03:41:19.007679 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-20 03:41:19.007699 | orchestrator |
2026-02-20 03:41:19.007718 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-20 03:41:19.007737 | orchestrator | Friday 20 February 2026 03:41:02 +0000 (0:00:12.383) 0:00:20.154 *******
2026-02-20 03:41:19.007755 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-20 03:41:19.007773 | orchestrator |
2026-02-20 03:41:19.007791 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-20 03:41:19.007810 | orchestrator | Friday 20 February 2026 03:41:06 +0000 (0:00:03.158) 0:00:23.312 *******
2026-02-20 03:41:19.007826 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-20 03:41:19.007844 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-20 03:41:19.007862 | orchestrator |
2026-02-20 03:41:19.007880 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-20 03:41:19.007898 | orchestrator | Friday 20 February 2026 03:41:09 +0000 (0:00:03.804) 0:00:27.116 *******
2026-02-20 03:41:19.007916 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-20 03:41:19.007933 | orchestrator |
2026-02-20 03:41:19.007951 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-20 03:41:19.007970 | orchestrator | Friday 20 February 2026 03:41:13 +0000 (0:00:03.203) 0:00:30.320 *******
2026-02-20 03:41:19.007987 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-20 03:41:19.008005 | orchestrator |
2026-02-20 03:41:19.008023 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-20 03:41:19.008042 | orchestrator | Friday 20 February 2026 03:41:16 +0000 (0:00:03.629) 0:00:33.950 *******
2026-02-20 03:41:19.008095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:19.008136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:19.008225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:19.008248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:41:19.008270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:41:19.008290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:41:19.008335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-20 03:41:29.509481 | orchestrator |
2026-02-20 03:41:29.509490 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-20 03:41:29.509499 | orchestrator | Friday 20 February 2026 03:41:19 +0000 (0:00:02.351) 0:00:36.301 *******
2026-02-20 03:41:29.509507 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:41:29.509515 | orchestrator |
2026-02-20 03:41:29.509577 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-20 03:41:29.509588 | orchestrator | Friday 20 February 2026 03:41:19 +0000 (0:00:00.517) 0:00:36.819 *******
2026-02-20 03:41:29.509614 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:41:29.509622 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:41:29.509630 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:41:29.509637 | orchestrator |
2026-02-20 03:41:29.509645 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-20 03:41:29.509652 | orchestrator | Friday 20 February 2026 03:41:20 +0000 (0:00:00.950) 0:00:37.770 *******
2026-02-20 03:41:29.509672 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509695 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509703 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509732 | orchestrator |
2026-02-20 03:41:29.509739 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-20 03:41:29.509746 | orchestrator | Friday 20 February 2026 03:41:22 +0000 (0:00:01.729) 0:00:39.500 *******
2026-02-20 03:41:29.509754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509761 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509768 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509775 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509782 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-20 03:41:29.509789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-20 03:41:29.509797 | orchestrator |
2026-02-20 03:41:29.509804 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-20 03:41:29.509811 | orchestrator | Friday 20 February 2026 03:41:23 +0000 (0:00:01.236) 0:00:40.736 *******
2026-02-20 03:41:29.509819 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-20 03:41:29.509827 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-20 03:41:29.509834 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-20 03:41:29.509841 | orchestrator |
2026-02-20 03:41:29.509848 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-20 03:41:29.509855 | orchestrator | Friday 20 February 2026 03:41:24 +0000 (0:00:00.676) 0:00:41.412 *******
2026-02-20 03:41:29.509862 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:41:29.509877 | orchestrator |
2026-02-20 03:41:29.509885 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-20 03:41:29.509894 | orchestrator | Friday 20 February 2026 03:41:24 +0000 (0:00:00.145) 0:00:41.557 *******
2026-02-20 03:41:29.509902 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:41:29.509910 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:41:29.509918 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:41:29.509926 | orchestrator |
2026-02-20 03:41:29.509934 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-20 03:41:29.509942 | orchestrator | Friday 20 February 2026 03:41:24 +0000 (0:00:00.462) 0:00:42.020 *******
2026-02-20 03:41:29.509950 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:41:29.509959 | orchestrator |
2026-02-20 03:41:29.509967 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-20 03:41:29.509975 | orchestrator | Friday 20 February 2026 03:41:25 +0000 (0:00:00.543) 0:00:42.563 *******
2026-02-20 03:41:29.509993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:30.329770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:30.329885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-20 03:41:30.329903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 03:41:30.329942 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.329954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330133 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:30.330246 | orchestrator | 2026-02-20 03:41:30.330260 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-20 03:41:30.330273 | orchestrator | Friday 20 February 2026 03:41:29 +0000 (0:00:04.271) 0:00:46.834 ******* 2026-02-20 03:41:30.330303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:30.933050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933242 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:41:30.933256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:30.933269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933342 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:41:30.933354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:30.933376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:30.933415 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:41:30.933426 | orchestrator | 2026-02-20 03:41:30.933435 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-20 03:41:30.933448 | orchestrator | Friday 20 February 2026 03:41:30 +0000 (0:00:00.816) 0:00:47.651 ******* 2026-02-20 03:41:30.933463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:35.513106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513388 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:41:35.513412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:35.513450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513552 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:41:35.513574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:35.513596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:35.513668 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:41:35.513689 | orchestrator | 2026-02-20 03:41:35.513710 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-20 03:41:35.513733 | orchestrator | Friday 20 
February 2026 03:41:31 +0000 (0:00:00.809) 0:00:48.460 ******* 2026-02-20 03:41:35.513770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:42.088229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:42.088343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:42.088359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-20 03:41:42.088403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:42.088529 | orchestrator | 2026-02-20 03:41:42.088542 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-20 03:41:42.088555 | orchestrator | Friday 20 February 2026 03:41:35 +0000 (0:00:04.522) 0:00:52.983 ******* 2026-02-20 03:41:42.088582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:46.156617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:46.156729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:41:46.156744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:46.156784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:46.156883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:46.156904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:41:46.156949 | orchestrator | 2026-02-20 03:41:46.156960 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-20 03:41:46.156972 | orchestrator | Friday 20 February 2026 03:41:42 +0000 (0:00:06.428) 0:00:59.412 ******* 
2026-02-20 03:41:46.156982 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-20 03:41:46.156992 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-20 03:41:46.157002 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-20 03:41:46.157011 | orchestrator | 2026-02-20 03:41:46.157048 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-20 03:41:46.157059 | orchestrator | Friday 20 February 2026 03:41:45 +0000 (0:00:03.461) 0:01:02.874 ******* 2026-02-20 03:41:46.157078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:49.490599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490823 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:41:49.490864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:49.490884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.490952 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:41:49.490963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-20 03:41:49.490975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.491000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.491012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 03:41:49.491023 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:41:49.491035 | orchestrator | 2026-02-20 03:41:49.491046 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-20 03:41:49.491059 | orchestrator | Friday 20 February 2026 03:41:46 +0000 (0:00:00.605) 0:01:03.479 ******* 2026-02-20 03:41:49.491080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:42:31.426319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:42:31.426501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-20 03:42:31.426576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-20 03:42:31.426709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-20 03:42:31.426720 | orchestrator |
2026-02-20 03:42:31.426734 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-20 03:42:31.426748 | orchestrator | Friday 20 February 2026 03:41:49 +0000 (0:00:03.328) 0:01:06.808 *******
2026-02-20 03:42:31.426761 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:42:31.426774 | orchestrator |
2026-02-20 03:42:31.426787 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-20 03:42:31.426799 | orchestrator | Friday 20 February 2026 03:41:51 +0000 (0:00:02.076) 0:01:08.884 *******
2026-02-20 03:42:31.426811 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:42:31.426823 | orchestrator |
2026-02-20 03:42:31.426836 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-20 03:42:31.426849 | orchestrator | Friday 20 February 2026 03:41:53 +0000 (0:00:02.212) 0:01:11.097 *******
2026-02-20 03:42:31.426861 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:42:31.426873 | orchestrator |
2026-02-20 03:42:31.426885 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-20 03:42:31.426897 | orchestrator | Friday 20 February 2026 03:42:31 +0000 (0:00:37.330) 0:01:48.428 *******
2026-02-20 03:42:31.426909 | orchestrator |
2026-02-20 03:42:31.426929 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-20 03:43:21.258353 | orchestrator | Friday 20 February 2026 03:42:31 +0000 (0:00:00.070) 0:01:48.498 *******
2026-02-20 03:43:21.258549 | orchestrator |
2026-02-20 03:43:21.258570 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-20 03:43:21.258583 | orchestrator | Friday 20 February 2026 03:42:31 +0000 (0:00:00.068) 0:01:48.567 *******
2026-02-20 03:43:21.258622 | orchestrator |
2026-02-20 03:43:21.258635 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-20 03:43:21.258646 | orchestrator | Friday 20 February 2026 03:42:31 +0000 (0:00:00.070) 0:01:48.637 *******
2026-02-20 03:43:21.258657 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:43:21.258668 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:43:21.258680 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:43:21.258690 | orchestrator |
2026-02-20 03:43:21.258702 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-20 03:43:21.258713 | orchestrator | Friday 20 February 2026 03:42:46 +0000 (0:00:14.827) 0:02:03.465 *******
2026-02-20 03:43:21.258724 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:43:21.258735 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:43:21.258745 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:43:21.258756 | orchestrator |
2026-02-20 03:43:21.258767 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-20 03:43:21.258778 | orchestrator | Friday 20 February 2026 03:42:52 +0000 (0:00:06.337) 0:02:09.802 *******
2026-02-20 03:43:21.258789 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:43:21.258800 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:43:21.258811 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:43:21.258822 | orchestrator |
2026-02-20 03:43:21.258833 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-20 03:43:21.258844 | orchestrator | Friday 20 February 2026 03:43:03 +0000 (0:00:10.455) 0:02:20.258 *******
2026-02-20 03:43:21.258855 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:43:21.258865 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:43:21.258876 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:43:21.258887 | orchestrator |
2026-02-20 03:43:21.258898 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:43:21.258910 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 03:43:21.258939 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 03:43:21.258950 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-20 03:43:21.258962 | orchestrator |
2026-02-20 03:43:21.258973 | orchestrator |
2026-02-20 03:43:21.258984 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:43:21.258995 | orchestrator | Friday 20 February 2026 03:43:20 +0000 (0:00:17.863) 0:02:38.122 *******
2026-02-20 03:43:21.259006 | orchestrator | ===============================================================================
2026-02-20 03:43:21.259017 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.33s
2026-02-20 03:43:21.259028 | orchestrator | manila : Restart manila-share container -------------------------------- 17.86s
2026-02-20 03:43:21.259039 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.83s
2026-02-20 03:43:21.259050 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.38s
2026-02-20 03:43:21.259061 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.46s
2026-02-20 03:43:21.259071 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.43s
2026-02-20 03:43:21.259082 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.34s
2026-02-20 03:43:21.259094 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.29s
2026-02-20 03:43:21.259105 | orchestrator | manila : Copying over config.json files for services -------------------- 4.52s
2026-02-20 03:43:21.259116 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.27s
2026-02-20 03:43:21.259127 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.80s
2026-02-20 03:43:21.259169 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.63s
2026-02-20 03:43:21.259181 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.46s
2026-02-20 03:43:21.259192 | orchestrator | manila : Check manila containers ---------------------------------------- 3.33s
2026-02-20 03:43:21.259203 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.20s
2026-02-20 03:43:21.259214 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.16s
2026-02-20 03:43:21.259225 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.35s
2026-02-20 03:43:21.259236 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.21s
2026-02-20 03:43:21.259247 | orchestrator | manila : Creating Manila database --------------------------------------- 2.08s
2026-02-20 03:43:21.259258 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.73s
2026-02-20 03:43:21.510596 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-20 03:43:33.622887 | orchestrator | 2026-02-20 03:43:33 | INFO  | Task 03deca34-1816-4f46-9e4a-17fcc852ff98 (netdata) was prepared for execution.
2026-02-20 03:43:33.623013 | orchestrator | 2026-02-20 03:43:33 | INFO  | It takes a moment until task 03deca34-1816-4f46-9e4a-17fcc852ff98 (netdata) has been started and output is visible here.
2026-02-20 03:45:08.248717 | orchestrator |
2026-02-20 03:45:08.248837 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:45:08.248853 | orchestrator |
2026-02-20 03:45:08.248865 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:45:08.248876 | orchestrator | Friday 20 February 2026 03:43:37 +0000 (0:00:00.173) 0:00:00.173 *******
2026-02-20 03:45:08.248888 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-20 03:45:08.248899 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-20 03:45:08.248910 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-20 03:45:08.248921 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-20 03:45:08.248932 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-20 03:45:08.248942 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-20 03:45:08.248953 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-20 03:45:08.248963 | orchestrator |
2026-02-20 03:45:08.248974 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-20 03:45:08.248985 | orchestrator |
2026-02-20 03:45:08.248996 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-20 03:45:08.249007 | orchestrator | Friday 20 February 2026 03:43:38 +0000 (0:00:00.671) 0:00:00.845 *******
2026-02-20 03:45:08.249020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:45:08.249032 | orchestrator |
2026-02-20 03:45:08.249044 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-20 03:45:08.249054 | orchestrator | Friday 20 February 2026 03:43:39 +0000 (0:00:00.924) 0:00:01.769 *******
2026-02-20 03:45:08.249065 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:08.249078 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:08.249088 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:08.249099 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:08.249109 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:08.249120 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:08.249131 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:08.249142 | orchestrator |
2026-02-20 03:45:08.249153 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-20 03:45:08.249213 | orchestrator | Friday 20 February 2026 03:43:40 +0000 (0:00:01.534) 0:00:03.304 *******
2026-02-20 03:45:08.249250 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:08.249263 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:08.249276 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:08.249288 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:08.249301 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:08.249314 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:08.249326 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:08.249339 | orchestrator |
2026-02-20 03:45:08.249351 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-20 03:45:08.249364 | orchestrator | Friday 20 February 2026 03:43:42 +0000 (0:00:01.955) 0:00:05.259 *******
2026-02-20 03:45:08.249376 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.249389 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:45:08.249401 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:45:08.249414 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:45:08.249426 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:45:08.249438 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:45:08.249451 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:45:08.249463 | orchestrator |
2026-02-20 03:45:08.249476 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-20 03:45:08.249489 | orchestrator | Friday 20 February 2026 03:43:44 +0000 (0:00:01.380) 0:00:06.640 *******
2026-02-20 03:45:08.249502 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.249512 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:45:08.249523 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:45:08.249533 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:45:08.249544 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:45:08.249554 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:45:08.249565 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:45:08.249575 | orchestrator |
2026-02-20 03:45:08.249586 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-20 03:45:08.249597 | orchestrator | Friday 20 February 2026 03:44:02 +0000 (0:00:18.214) 0:00:24.855 *******
2026-02-20 03:45:08.249607 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.249618 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:45:08.249629 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:45:08.249639 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:45:08.249650 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:45:08.249660 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:45:08.249671 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:45:08.249682 | orchestrator |
2026-02-20 03:45:08.249693 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-20 03:45:08.249704 | orchestrator | Friday 20 February 2026 03:44:43 +0000 (0:00:41.488) 0:01:06.343 *******
2026-02-20 03:45:08.249716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:45:08.249728 | orchestrator |
2026-02-20 03:45:08.249739 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-20 03:45:08.249749 | orchestrator | Friday 20 February 2026 03:44:45 +0000 (0:00:01.509) 0:01:07.853 *******
2026-02-20 03:45:08.249760 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-20 03:45:08.249771 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-20 03:45:08.249782 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-20 03:45:08.249793 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-20 03:45:08.249822 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-20 03:45:08.249834 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-20 03:45:08.249845 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-20 03:45:08.249855 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-20 03:45:08.249874 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-20 03:45:08.249884 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-20 03:45:08.249895 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-20 03:45:08.249906 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-20 03:45:08.249916 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-20 03:45:08.249927 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-20 03:45:08.249937 | orchestrator |
2026-02-20 03:45:08.249948 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-20 03:45:08.249960 | orchestrator | Friday 20 February 2026 03:44:48 +0000 (0:00:03.420) 0:01:11.274 *******
2026-02-20 03:45:08.249971 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:08.249981 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:08.249992 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:08.250003 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:08.250013 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:08.250087 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:08.250097 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:08.250108 | orchestrator |
2026-02-20 03:45:08.250119 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-20 03:45:08.250130 | orchestrator | Friday 20 February 2026 03:44:49 +0000 (0:00:01.193) 0:01:12.467 *******
2026-02-20 03:45:08.250141 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.250151 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:45:08.250162 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:45:08.250203 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:45:08.250219 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:45:08.250230 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:45:08.250241 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:45:08.250251 | orchestrator |
2026-02-20 03:45:08.250262 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-20 03:45:08.250273 | orchestrator | Friday 20 February 2026 03:44:51 +0000 (0:00:01.221) 0:01:13.688 *******
2026-02-20 03:45:08.250283 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:08.250294 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:08.250305 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:08.250315 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:08.250332 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:08.250343 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:08.250354 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:08.250364 | orchestrator |
2026-02-20 03:45:08.250375 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-20 03:45:08.250386 | orchestrator | Friday 20 February 2026 03:44:52 +0000 (0:00:01.165) 0:01:14.854 *******
2026-02-20 03:45:08.250397 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:08.250407 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:08.250417 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:08.250428 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:08.250439 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:08.250449 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:08.250460 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:08.250470 | orchestrator |
2026-02-20 03:45:08.250481 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-20 03:45:08.250492 | orchestrator | Friday 20 February 2026 03:44:53 +0000 (0:00:01.539) 0:01:16.394 *******
2026-02-20 03:45:08.250503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-20 03:45:08.250516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:45:08.250527 | orchestrator |
2026-02-20 03:45:08.250538 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-20 03:45:08.250556 | orchestrator | Friday 20 February 2026 03:44:55 +0000 (0:00:01.350) 0:01:17.744 *******
2026-02-20 03:45:08.250567 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.250578 | orchestrator |
2026-02-20 03:45:08.250589 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-20 03:45:08.250599 | orchestrator | Friday 20 February 2026 03:44:57 +0000 (0:00:01.967) 0:01:19.712 *******
2026-02-20 03:45:08.250610 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:45:08.250621 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:45:08.250632 | orchestrator | changed: [testbed-node-3]
2026-02-20 03:45:08.250642 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:45:08.250653 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:45:08.250663 | orchestrator | changed: [testbed-node-5]
2026-02-20 03:45:08.250674 | orchestrator | changed: [testbed-manager]
2026-02-20 03:45:08.250685 | orchestrator |
2026-02-20 03:45:08.250696 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:45:08.250706 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.250718 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.250729 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.250740 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.250759 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.603329 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.603439 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-20 03:45:08.603461 | orchestrator |
2026-02-20 03:45:08.603475 | orchestrator |
2026-02-20 03:45:08.603487 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:45:08.603500 | orchestrator | Friday 20 February 2026 03:45:08 +0000 (0:00:11.097) 0:01:30.809 *******
2026-02-20 03:45:08.603511 | orchestrator | ===============================================================================
2026-02-20 03:45:08.603522 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.49s
2026-02-20 03:45:08.603533 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.21s
2026-02-20 03:45:08.603544 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.10s
2026-02-20 03:45:08.603555 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.42s
2026-02-20 03:45:08.603573 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.97s
2026-02-20 03:45:08.603591 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 1.96s
2026-02-20 03:45:08.603609 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.54s
2026-02-20 03:45:08.603627 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.53s
2026-02-20 03:45:08.603645 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.51s
2026-02-20 03:45:08.603662 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.38s
2026-02-20 03:45:08.603680 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s
2026-02-20 03:45:08.603697 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.22s
2026-02-20 03:45:08.603716 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.19s
2026-02-20 03:45:08.603787 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.17s
2026-02-20 03:45:08.603808 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 0.92s
2026-02-20 03:45:08.603827 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2026-02-20 03:45:12.294736 | orchestrator | 2026-02-20 03:45:12 | INFO  | Task 1d4cd53c-4f9d-4ff0-9fef-a3e3643f6de3 (prometheus) was prepared for execution.
2026-02-20 03:45:12.294827 | orchestrator | 2026-02-20 03:45:12 | INFO  | It takes a moment until task 1d4cd53c-4f9d-4ff0-9fef-a3e3643f6de3 (prometheus) has been started and output is visible here.
2026-02-20 03:45:20.366726 | orchestrator |
2026-02-20 03:45:20.366826 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:45:20.366836 | orchestrator |
2026-02-20 03:45:20.366844 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:45:20.366851 | orchestrator | Friday 20 February 2026 03:45:15 +0000 (0:00:00.246) 0:00:00.246 *******
2026-02-20 03:45:20.366857 | orchestrator | ok: [testbed-manager]
2026-02-20 03:45:20.366864 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:45:20.366871 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:45:20.366877 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:45:20.366883 | orchestrator | ok: [testbed-node-3]
2026-02-20 03:45:20.366889 | orchestrator | ok: [testbed-node-4]
2026-02-20 03:45:20.366895 | orchestrator | ok: [testbed-node-5]
2026-02-20 03:45:20.366900 | orchestrator |
2026-02-20 03:45:20.366907 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:45:20.366913 | orchestrator | Friday 20 February 2026 03:45:16 +0000 (0:00:00.712) 0:00:00.959 *******
2026-02-20 03:45:20.366919 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366925 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366931 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366937 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366943 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366949 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366954 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-20 03:45:20.366960 | orchestrator |
2026-02-20 03:45:20.366966 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-20 03:45:20.366972 | orchestrator |
2026-02-20 03:45:20.366978 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-20 03:45:20.366984 | orchestrator | Friday 20 February 2026 03:45:17 +0000 (0:00:00.721) 0:00:01.680 *******
2026-02-20 03:45:20.366990 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 03:45:20.366998 | orchestrator |
2026-02-20 03:45:20.367004 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-20 03:45:20.367010 | orchestrator | Friday 20 February 2026 03:45:18 +0000 (0:00:01.183) 0:00:02.863 *******
2026-02-20 03:45:20.367019 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-20 03:45:20.367049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367075 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:20.367109 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:20.367121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:20.367133 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:20.367143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:20.367153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:21.390532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:21.390613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:21.390634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-20 03:45:21.390662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:21.390700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:21.390737 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:21.390744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:45:21.390750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:21.390761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:21.390773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:25.963624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:45:25.963737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:25.963753 | orchestrator | 2026-02-20 03:45:25.963766 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-20 03:45:25.963778 | orchestrator | Friday 20 February 2026 03:45:21 +0000 (0:00:02.772) 0:00:05.636 ******* 2026-02-20 03:45:25.963837 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 03:45:25.963851 | orchestrator | 2026-02-20 03:45:25.963861 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-20 03:45:25.963871 | orchestrator | Friday 20 February 2026 03:45:22 +0000 (0:00:01.503) 0:00:07.140 ******* 2026-02-20 03:45:25.963883 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 03:45:25.963896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963931 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.963997 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:25.964007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:25.964018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:25.964032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:25.964044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:25.964062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139890 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:28.139905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:28.139910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:28.139927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-02-20 03:45:28.139969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-20 03:45:28.139979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.139988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.140001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:45:28.140006 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:28.140010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:28.140024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:29.170588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:45:29.170693 | orchestrator | 2026-02-20 03:45:29.170708 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-20 03:45:29.170721 | orchestrator | Friday 20 February 2026 03:45:28 +0000 (0:00:05.248) 0:00:12.389 ******* 2026-02-20 03:45:29.170735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-20 03:45:29.170749 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.170761 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.170826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-20 03:45:29.170883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.170898 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:45:29.170911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.170923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.170935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.170947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.170964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-20 03:45:29.170975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.170994 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:45:29.171005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.171025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.746657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.746763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.746780 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:45:29.746795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.746819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.746849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.746886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.746899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:29.746933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.746955 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:45:29.746974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.746993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.747011 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:45:29.747030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.747058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.747092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:29.747105 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:45:29.747117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:29.747138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:30.825383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:30.825494 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:45:30.825511 | orchestrator | 2026-02-20 03:45:30.825524 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-20 03:45:30.825536 | orchestrator | Friday 20 February 2026 03:45:29 +0000 (0:00:01.602) 0:00:13.991 ******* 2026-02-20 03:45:30.825548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:30.825561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:30.825640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:30.825684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:30.825708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:30.825745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-20 03:45:30.825758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:30.825770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:30.825792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-20 03:45:31.920262 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:31.920401 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:45:31.920420 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:45:31.920432 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:45:31.920446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:31.920532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:31.920547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:31.920560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 03:45:31.920585 | orchestrator | skipping: 
[testbed-node-2] 2026-02-20 03:45:31.920597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:31.920630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920664 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:45:31.920676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:31.920694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920721 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:45:31.920734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 03:45:31.920748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 03:45:31.920771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 03:45:35.271909 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:45:35.272006 | orchestrator | 2026-02-20 03:45:35.272022 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-20 03:45:35.272036 | orchestrator | Friday 20 February 2026 03:45:31 +0000 (0:00:02.166) 0:00:16.158 ******* 2026-02-20 03:45:35.272075 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 03:45:35.272109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:35.272125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:45:35.272137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:35.272150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:35.272162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:35.272262 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:35.272280 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-20 03:45:35.272288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:35.272300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:35.272308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:35.272315 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:35.272324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:35.272331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:35.272344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740386 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-20 03:45:38.740417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740590 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-20 03:45:38.740625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 03:45:38.740692 | orchestrator |
2026-02-20 03:45:38.740710 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-20 03:45:38.740729 | orchestrator | Friday 20 February 2026 03:45:37 +0000 (0:00:05.984) 0:00:22.142 *******
2026-02-20 03:45:38.740745 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 03:45:38.740761 | orchestrator |
2026-02-20 03:45:38.740778 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-20 03:45:38.740806 | orchestrator | Friday 20 February 2026 03:45:38 +0000 (0:00:00.852) 0:00:22.994 *******
2026-02-20 03:45:41.711543 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711664 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711681 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711694 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711707 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711738 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711767 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711780 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331554, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8435588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711796 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711839 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711857 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711877 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711908 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711929 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:41.711957 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787418 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787524 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787569 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787610 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787625 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787639 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787677 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331701, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8746152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787702 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787717 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787741 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787758 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787773 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:42.787819 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941398 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941530 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941593 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941614 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941626 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941637 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941649 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941690 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941703 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:43.941724 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock':
False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:43.941736 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:43.941747 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:43.941759 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:43.941771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:43.941797 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331538, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8427093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 03:45:45.117481 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-02-20 03:45:45.117604 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117620 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117631 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117641 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117652 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117678 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117714 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 
1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117725 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117745 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117756 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117765 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:45.117781 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-20 03:45:45.117805 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331691, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8730173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 03:45:46.348716 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348821 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348838 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348851 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348862 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348890 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 
1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348925 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348954 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348967 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.348990 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1331533, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8402412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-20 03:45:46.349001 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-20 03:45:46.349025 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.349038 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:46.349057 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466680 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466800 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466817 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466830 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 
91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466889 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466904 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466923 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466963 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.466986 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.467005 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 
03:45:47.467025 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.467065 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.467087 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-20 03:45:47.467109 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:45:47.467128 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1331678, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:47.467149 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737014 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737124 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737227 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737257 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737269 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737292 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737322 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737334 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1331689, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8716955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737371 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737382 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737394 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737405 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:48.737425 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592416 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:45:54.592541 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592590 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592631 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592647 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592660 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592673 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:45:54.592685 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592702 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:45:54.592734 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592759 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592784 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:45:54.592802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331682, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.869358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592827 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:45:54.592839 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331552, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8428907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592852 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331698, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.874029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:45:54.592872 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331524, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8383574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.108814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331717, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.108935 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331697, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8736703, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.108972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331535, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8405814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.108987 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331526, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8392966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.108999 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331686, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8713253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.109010 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331684, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8707373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.109022 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331713, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.877756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-20 03:46:16.109060 | orchestrator |
2026-02-20 03:46:16.109092 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-20 03:46:16.109105 | orchestrator | Friday 20 February 2026 03:46:00 +0000 (0:00:21.320) 0:00:44.315 *******
2026-02-20 03:46:16.109116 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 03:46:16.109128 | orchestrator |
2026-02-20 03:46:16.109139 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-20 03:46:16.109150 | orchestrator | Friday 20 February 2026 03:46:00 +0000 (0:00:00.729) 0:00:45.045 *******
2026-02-20 03:46:16.109161 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109224 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109235 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109246 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109257 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109279 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109290 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109312 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109323 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109345 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109367 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109378 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109400 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109428 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109438 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109449 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109460 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109471 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109482 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109493 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109515 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109537 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109548 | orchestrator | [WARNING]: Skipped
2026-02-20 03:46:16.109558 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109569 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-20 03:46:16.109589 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-20 03:46:16.109600 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-20 03:46:16.109611 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 03:46:16.109622 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-20 03:46:16.109633 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:46:16.109644 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-20 03:46:16.109655 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-20 03:46:16.109666 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-20 03:46:16.109677 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-20 03:46:16.109688 | orchestrator |
2026-02-20 03:46:16.109699 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-20 03:46:16.109710 | orchestrator | Friday 20 February 2026 03:46:02 +0000 (0:00:01.743) 0:00:46.789 *******
2026-02-20 03:46:16.109721 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109738 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109757 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:16.109788 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109810 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:16.109831 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:46:16.109852 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109872 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:16.109891 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109903 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:16.109914 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109925 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:16.109936 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-20 03:46:16.109946 | orchestrator |
2026-02-20 03:46:16.109957 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-20 03:46:16.109978 | orchestrator | Friday 20 February 2026 03:46:16 +0000 (0:00:13.554) 0:01:00.343 *******
2026-02-20 03:46:31.423898 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424009 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:31.424028 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424038 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:31.424048 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424058 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:46:31.424067 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424076 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:31.424086 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424096 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:31.424106 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424117 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:31.424127 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-20 03:46:31.424138 | orchestrator |
2026-02-20 03:46:31.424149 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-20 03:46:31.424206 | orchestrator | Friday 20 February 2026 03:46:18 +0000 (0:00:02.827) 0:01:03.170 *******
2026-02-20 03:46:31.424215 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424222 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:31.424240 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424246 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:31.424252 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424258 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:31.424263 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424269 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:46:31.424275 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424281 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:31.424287 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424293 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-20 03:46:31.424299 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:31.424304 | orchestrator |
2026-02-20 03:46:31.424311 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-20 03:46:31.424316 | orchestrator | Friday 20 February 2026 03:46:20 +0000 (0:00:01.409) 0:01:04.580 *******
2026-02-20 03:46:31.424322 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-20 03:46:31.424328 | orchestrator |
2026-02-20 03:46:31.424334 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-20 03:46:31.424340 | orchestrator | Friday 20 February 2026 03:46:20 +0000 (0:00:00.611) 0:01:05.191 *******
2026-02-20 03:46:31.424346 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:46:31.424352 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:31.424358 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:31.424363 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:46:31.424369 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:31.424375 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:31.424380 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:31.424386 | orchestrator |
2026-02-20 03:46:31.424392 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-20 03:46:31.424437 | orchestrator | Friday 20 February 2026 03:46:21 +0000 (0:00:00.634) 0:01:05.826 *******
2026-02-20 03:46:31.424444 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:46:31.424451 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:31.424457 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:31.424464 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:31.424470 | orchestrator | changed: [testbed-node-0]
2026-02-20 03:46:31.424477 | orchestrator | changed: [testbed-node-1]
2026-02-20 03:46:31.424483 | orchestrator | changed: [testbed-node-2]
2026-02-20 03:46:31.424489 | orchestrator |
2026-02-20 03:46:31.424496 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-20 03:46:31.424505 | orchestrator | Friday 20 February 2026 03:46:23 +0000 (0:00:01.862) 0:01:07.689 *******
2026-02-20 03:46:31.424515 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424524 | orchestrator | skipping: [testbed-manager]
2026-02-20 03:46:31.424533 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424541 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:31.424558 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424568 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424598 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:31.424608 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:46:31.424617 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424626 | orchestrator | skipping: [testbed-node-3]
2026-02-20 03:46:31.424636 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424645 | orchestrator | skipping: [testbed-node-4]
2026-02-20 03:46:31.424655 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-20 03:46:31.424664 | orchestrator | skipping: [testbed-node-5]
2026-02-20 03:46:31.424674 | orchestrator |
2026-02-20 03:46:31.424685 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-20 03:46:31.424695 | orchestrator | Friday 20 February 2026 03:46:24 +0000 (0:00:01.378) 0:01:09.067 *******
2026-02-20 03:46:31.424706 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-20 03:46:31.424716 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:46:31.424723 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-20 03:46:31.424732 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:46:31.424741
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-20 03:46:31.424757 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:46:31.424767 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-20 03:46:31.424776 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:46:31.424786 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-20 03:46:31.424795 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:46:31.424811 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-20 03:46:31.424819 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:46:31.424829 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-20 03:46:31.424838 | orchestrator | 2026-02-20 03:46:31.424847 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-20 03:46:31.424857 | orchestrator | Friday 20 February 2026 03:46:26 +0000 (0:00:01.408) 0:01:10.476 ******* 2026-02-20 03:46:31.424866 | orchestrator | [WARNING]: Skipped 2026-02-20 03:46:31.424877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-20 03:46:31.424887 | orchestrator | due to this access issue: 2026-02-20 03:46:31.424896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-20 03:46:31.424905 | orchestrator | not a directory 2026-02-20 03:46:31.424914 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 03:46:31.424924 | orchestrator | 2026-02-20 03:46:31.424933 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-02-20 03:46:31.424942 | orchestrator | Friday 20 February 2026 03:46:27 +0000 (0:00:01.095) 0:01:11.571 ******* 2026-02-20 03:46:31.424952 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:46:31.424962 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:46:31.424971 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:46:31.424980 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:46:31.424991 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:46:31.424999 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:46:31.425017 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:46:31.425031 | orchestrator | 2026-02-20 03:46:31.425043 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-20 03:46:31.425051 | orchestrator | Friday 20 February 2026 03:46:28 +0000 (0:00:00.890) 0:01:12.462 ******* 2026-02-20 03:46:31.425060 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:46:31.425068 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:46:31.425077 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:46:31.425085 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:46:31.425094 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:46:31.425103 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:46:31.425112 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:46:31.425120 | orchestrator | 2026-02-20 03:46:31.425128 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-20 03:46:31.425137 | orchestrator | Friday 20 February 2026 03:46:29 +0000 (0:00:00.852) 0:01:13.314 ******* 2026-02-20 03:46:31.425150 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-20 03:46:31.425233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426664 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426751 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-20 03:46:33.426772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:46:33.426814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:46:33.426833 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:46:33.426860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:46:33.426879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:46:33.426907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:46:33.426926 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:46:33.426945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:46:33.426965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:46:33.427002 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-20 03:47:13.254733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.254944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.254965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:47:13.254978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.254991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.255016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.255028 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:47:13.255071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-20 03:47:13.255094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:47:13.255116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:47:13.255130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 03:47:13.255144 | orchestrator | 2026-02-20 03:47:13.255159 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-20 03:47:13.255213 | orchestrator | Friday 20 February 2026 03:46:33 +0000 (0:00:04.361) 0:01:17.676 ******* 2026-02-20 03:47:13.255229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-20 03:47:13.255242 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:47:13.255255 | orchestrator | 2026-02-20 03:47:13.255267 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255280 | orchestrator | Friday 20 February 2026 03:46:34 +0000 (0:00:01.177) 0:01:18.854 ******* 2026-02-20 03:47:13.255292 | orchestrator | 2026-02-20 03:47:13.255304 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255317 | orchestrator | Friday 20 February 2026 03:46:34 +0000 (0:00:00.218) 0:01:19.072 ******* 
2026-02-20 03:47:13.255329 | orchestrator | 2026-02-20 03:47:13.255342 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255354 | orchestrator | Friday 20 February 2026 03:46:34 +0000 (0:00:00.070) 0:01:19.143 ******* 2026-02-20 03:47:13.255367 | orchestrator | 2026-02-20 03:47:13.255379 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255391 | orchestrator | Friday 20 February 2026 03:46:34 +0000 (0:00:00.068) 0:01:19.212 ******* 2026-02-20 03:47:13.255403 | orchestrator | 2026-02-20 03:47:13.255415 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255428 | orchestrator | Friday 20 February 2026 03:46:35 +0000 (0:00:00.089) 0:01:19.301 ******* 2026-02-20 03:47:13.255439 | orchestrator | 2026-02-20 03:47:13.255449 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255468 | orchestrator | Friday 20 February 2026 03:46:35 +0000 (0:00:00.069) 0:01:19.370 ******* 2026-02-20 03:47:13.255487 | orchestrator | 2026-02-20 03:47:13.255505 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-20 03:47:13.255524 | orchestrator | Friday 20 February 2026 03:46:35 +0000 (0:00:00.065) 0:01:19.436 ******* 2026-02-20 03:47:13.255543 | orchestrator | 2026-02-20 03:47:13.255563 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-20 03:47:13.255583 | orchestrator | Friday 20 February 2026 03:46:35 +0000 (0:00:00.100) 0:01:19.536 ******* 2026-02-20 03:47:13.255601 | orchestrator | changed: [testbed-manager] 2026-02-20 03:47:13.255624 | orchestrator | 2026-02-20 03:47:13.255636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-20 03:47:13.255647 | orchestrator | 
Friday 20 February 2026 03:46:55 +0000 (0:00:20.705) 0:01:40.241 ******* 2026-02-20 03:47:13.255658 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:47:13.255669 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:47:13.255679 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:47:13.255690 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:47:13.255701 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:47:13.255712 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:47:13.255722 | orchestrator | changed: [testbed-manager] 2026-02-20 03:47:13.255733 | orchestrator | 2026-02-20 03:47:13.255744 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-20 03:47:13.255755 | orchestrator | Friday 20 February 2026 03:47:07 +0000 (0:00:11.685) 0:01:51.927 ******* 2026-02-20 03:47:13.255766 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:47:13.255777 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:47:13.255787 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:47:13.255798 | orchestrator | 2026-02-20 03:47:13.255817 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-20 03:48:17.468745 | orchestrator | Friday 20 February 2026 03:47:13 +0000 (0:00:05.569) 0:01:57.497 ******* 2026-02-20 03:48:17.468896 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:48:17.468926 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:48:17.468965 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:48:17.468978 | orchestrator | 2026-02-20 03:48:17.468990 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-20 03:48:17.469002 | orchestrator | Friday 20 February 2026 03:47:23 +0000 (0:00:10.331) 0:02:07.828 ******* 2026-02-20 03:48:17.469013 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:48:17.469024 | orchestrator | changed: [testbed-node-3] 2026-02-20 
03:48:17.469034 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:48:17.469045 | orchestrator | changed: [testbed-manager] 2026-02-20 03:48:17.469056 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:48:17.469066 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:48:17.469077 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:48:17.469088 | orchestrator | 2026-02-20 03:48:17.469099 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-20 03:48:17.469109 | orchestrator | Friday 20 February 2026 03:47:38 +0000 (0:00:14.779) 0:02:22.607 ******* 2026-02-20 03:48:17.469120 | orchestrator | changed: [testbed-manager] 2026-02-20 03:48:17.469131 | orchestrator | 2026-02-20 03:48:17.469142 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-20 03:48:17.469154 | orchestrator | Friday 20 February 2026 03:47:46 +0000 (0:00:07.810) 0:02:30.418 ******* 2026-02-20 03:48:17.469165 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:48:17.469176 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:48:17.469222 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:48:17.469233 | orchestrator | 2026-02-20 03:48:17.469243 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-20 03:48:17.469255 | orchestrator | Friday 20 February 2026 03:47:56 +0000 (0:00:10.731) 0:02:41.149 ******* 2026-02-20 03:48:17.469265 | orchestrator | changed: [testbed-manager] 2026-02-20 03:48:17.469276 | orchestrator | 2026-02-20 03:48:17.469287 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-20 03:48:17.469298 | orchestrator | Friday 20 February 2026 03:48:06 +0000 (0:00:09.970) 0:02:51.119 ******* 2026-02-20 03:48:17.469309 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:48:17.469320 | orchestrator | changed: [testbed-node-3] 2026-02-20 
03:48:17.469330 | orchestrator | changed: [testbed-node-4]
2026-02-20 03:48:17.469341 | orchestrator |
2026-02-20 03:48:17.469352 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 03:48:17.469364 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-20 03:48:17.469405 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-20 03:48:17.469425 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-20 03:48:17.469443 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-20 03:48:17.469462 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-20 03:48:17.469480 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-20 03:48:17.469497 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-20 03:48:17.469517 | orchestrator |
2026-02-20 03:48:17.469535 | orchestrator |
2026-02-20 03:48:17.469555 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 03:48:17.469574 | orchestrator | Friday 20 February 2026 03:48:17 +0000 (0:00:10.148) 0:03:01.268 *******
2026-02-20 03:48:17.469592 | orchestrator | ===============================================================================
2026-02-20 03:48:17.469604 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.32s
2026-02-20 03:48:17.469614 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.71s
2026-02-20 03:48:17.469625 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.78s
2026-02-20 03:48:17.469636 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.55s
2026-02-20 03:48:17.469646 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.69s
2026-02-20 03:48:17.469657 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.73s
2026-02-20 03:48:17.469668 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.33s
2026-02-20 03:48:17.469678 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.15s
2026-02-20 03:48:17.469689 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.97s
2026-02-20 03:48:17.469700 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.81s
2026-02-20 03:48:17.469710 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.98s
2026-02-20 03:48:17.469721 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.57s
2026-02-20 03:48:17.469732 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.25s
2026-02-20 03:48:17.469765 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.36s
2026-02-20 03:48:17.469776 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.83s
2026-02-20 03:48:17.469794 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.77s
2026-02-20 03:48:17.469805 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.17s
2026-02-20 03:48:17.469816 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.86s
2026-02-20 03:48:17.469827 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.74s
2026-02-20 03:48:17.469837 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.60s
2026-02-20 03:48:19.922936 | orchestrator | 2026-02-20 03:48:19 | INFO  | Task 9a1ea9ee-5ae9-4302-977d-aac028c08931 (grafana) was prepared for execution.
2026-02-20 03:48:19.923068 | orchestrator | 2026-02-20 03:48:19 | INFO  | It takes a moment until task 9a1ea9ee-5ae9-4302-977d-aac028c08931 (grafana) has been started and output is visible here.
2026-02-20 03:48:29.654428 | orchestrator |
2026-02-20 03:48:29.654552 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 03:48:29.654568 | orchestrator |
2026-02-20 03:48:29.654580 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 03:48:29.654592 | orchestrator | Friday 20 February 2026 03:48:24 +0000 (0:00:00.279) 0:00:00.279 *******
2026-02-20 03:48:29.654603 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:48:29.654615 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:48:29.654626 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:48:29.654640 | orchestrator |
2026-02-20 03:48:29.654661 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 03:48:29.654679 | orchestrator | Friday 20 February 2026 03:48:24 +0000 (0:00:00.327) 0:00:00.607 *******
2026-02-20 03:48:29.654697 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-20 03:48:29.654718 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-20 03:48:29.654737 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-20 03:48:29.654755 | orchestrator |
2026-02-20 03:48:29.654774 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-20 03:48:29.654794 | orchestrator |
2026-02-20 03:48:29.654813 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-20 03:48:29.654832 | orchestrator | Friday 20 February 2026 03:48:24 +0000 (0:00:00.429) 0:00:01.037 *******
2026-02-20 03:48:29.654854 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:48:29.654876 | orchestrator |
2026-02-20 03:48:29.654896 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-20 03:48:29.654918 | orchestrator | Friday 20 February 2026 03:48:25 +0000 (0:00:00.579) 0:00:01.616 *******
2026-02-20 03:48:29.654943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.654967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.654980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655015 | orchestrator |
2026-02-20 03:48:29.655027 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-20 03:48:29.655053 | orchestrator | Friday 20 February 2026 03:48:26 +0000 (0:00:00.916) 0:00:02.533 *******
2026-02-20 03:48:29.655064 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-20 03:48:29.655075 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-20 03:48:29.655086 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:48:29.655097 | orchestrator |
2026-02-20 03:48:29.655108 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-20 03:48:29.655119 | orchestrator | Friday 20 February 2026 03:48:27 +0000 (0:00:00.811) 0:00:03.344 *******
2026-02-20 03:48:29.655130 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 03:48:29.655141 | orchestrator |
2026-02-20 03:48:29.655151 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-20 03:48:29.655162 | orchestrator | Friday 20 February 2026 03:48:27 +0000 (0:00:00.541) 0:00:03.886 *******
2026-02-20 03:48:29.655223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655260 | orchestrator |
2026-02-20 03:48:29.655270 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-20 03:48:29.655281 | orchestrator | Friday 20 February 2026 03:48:29 +0000 (0:00:01.276) 0:00:05.162 *******
2026-02-20 03:48:29.655292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655312 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:48:29.655329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:29.655341 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:48:29.655362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746778 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:48:36.746852 | orchestrator |
2026-02-20 03:48:36.746859 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-20 03:48:36.746865 | orchestrator | Friday 20 February 2026 03:48:29 +0000 (0:00:00.583) 0:00:05.746 *******
2026-02-20 03:48:36.746871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746877 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:48:36.746881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746885 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:48:36.746889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746909 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:48:36.746913 | orchestrator |
2026-02-20 03:48:36.746917 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-20 03:48:36.746921 | orchestrator | Friday 20 February 2026 03:48:30 +0000 (0:00:00.630) 0:00:06.376 *******
2026-02-20 03:48:36.746935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
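As an aside for anyone post-processing this console output: the PLAY RECAP lines above have a regular `host : key=value …` shape and can be turned into structured counters. A minimal sketch (the regex and the `parse_recap_line` helper are illustrative, not part of the job's tooling):

```python
import re

# Matches an Ansible PLAY RECAP line, e.g.
# "testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for one recap line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counts = {k: int(v) for k, v in (p.split("=") for p in m.group("counts").split())}
    return m.group("host"), counts

host, counts = parse_recap_line(
    "testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0"
)
# host == "testbed-manager"; counts["failed"] == 0 and counts["ok"] == 23
```

A wrapper that scans the whole log for `failed=` values greater than zero is a common way to gate on runs like this one, where every host reports `failed=0`.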
2026-02-20 03:48:36.746960 | orchestrator |
2026-02-20 03:48:36.746963 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-20 03:48:36.746967 | orchestrator | Friday 20 February 2026 03:48:31 +0000 (0:00:01.429) 0:00:07.806 *******
2026-02-20 03:48:36.746971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:48:36.746987 | orchestrator |
2026-02-20 03:48:36.746991 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-20 03:48:36.746995 | orchestrator | Friday 20 February 2026 03:48:33 +0000 (0:00:01.607) 0:00:09.414 *******
2026-02-20 03:48:36.746998 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:48:36.747002 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:48:36.747006 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:48:36.747010 | orchestrator |
2026-02-20 03:48:36.747013 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-20 03:48:36.747017 | orchestrator | Friday 20 February 2026 03:48:33 +0000 (0:00:00.327) 0:00:09.741 *******
2026-02-20 03:48:36.747024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-20 03:48:36.747029 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-20 03:48:36.747033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-20 03:48:36.747037 | orchestrator |
2026-02-20 03:48:36.747040 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-20 03:48:36.747044 | orchestrator | Friday 20 February 2026 03:48:34 +0000 (0:00:01.304) 0:00:11.046 *******
2026-02-20 03:48:36.747048 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-20 03:48:36.747053 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-20 03:48:36.747056 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-20 03:48:36.747060 | orchestrator |
2026-02-20 03:48:36.747064 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-20 03:48:36.747071 | orchestrator | Friday 20 February 2026 03:48:36 +0000 (0:00:01.785) 0:00:12.832 *******
2026-02-20 03:48:43.162851 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-20 03:48:43.162970 | orchestrator |
2026-02-20 03:48:43.162986 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-20 03:48:43.163002 | orchestrator | Friday 20 February 2026 03:48:37 +0000 (0:00:00.739) 0:00:13.572 *******
2026-02-20 03:48:43.163015 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-20 03:48:43.163029 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-20 03:48:43.163043 | orchestrator | ok: [testbed-node-0]
2026-02-20 03:48:43.163057 | orchestrator | ok: [testbed-node-1]
2026-02-20 03:48:43.163070 | orchestrator | ok: [testbed-node-2]
2026-02-20 03:48:43.163082 | orchestrator |
2026-02-20 03:48:43.163096 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-20 03:48:43.163109 | orchestrator | Friday 20 February 2026 03:48:38 +0000 (0:00:00.365) 0:00:14.291 *******
2026-02-20 03:48:43.163123 | orchestrator | skipping: [testbed-node-0]
2026-02-20 03:48:43.163158 | orchestrator | skipping: [testbed-node-1]
2026-02-20 03:48:43.163172 | orchestrator | skipping: [testbed-node-2]
2026-02-20 03:48:43.163270 | orchestrator |
2026-02-20 03:48:43.163284 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-20 03:48:43.163296 | orchestrator | Friday 20 February 2026 03:48:38 +0000 (0:00:00.365) 0:00:14.656 *******
2026-02-20 03:48:43.163313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330927, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7003553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330927, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7003553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1330927, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7003553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331312, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.797451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331312, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.797451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331312, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.797451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330937, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330937, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1330937, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331316, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.799671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331316, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.799671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:43.163581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331316, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.799671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331035, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.741125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331035, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.741125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331035, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.741125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331297, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7954698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331297, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7954698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331297, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7954698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330925, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.6990826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330925, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.6990826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:46.835293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1330925, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.6990826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:46.835303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330932, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7013555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:46.835318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330932, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7013555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:46.835329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1330932, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7013555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:46.835353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330940, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.703859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330940, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.703859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1331042, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7432408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1330940, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.703859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1331042, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7432408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1331308, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7967627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1331042, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7432408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1331308, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7967627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330935, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1331308, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7967627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330935, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128815 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1331051, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7936113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1331051, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7936113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:51.128848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1330935, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7025795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760360 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1331037, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.742417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1331037, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.742417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1331051, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7936113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760479 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1331030, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.740621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1331030, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.740621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1331037, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.742417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-20 03:48:54.760531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1331027, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7367275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1331027, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7367275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1331030, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.740621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1331045, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7455475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1331045, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7455475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1331027, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7367275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:54.760606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330941, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7045672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330941, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7045672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1331045, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7455475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1331303, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7965398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1331303, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7965398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1330941, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.7045672, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1331506, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8374534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1331506, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8374534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1331303, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771552127.7965398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1331370, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8116853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1331370, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8116853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-20 03:48:58.906955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1331506, 'dev': 91, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8374534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:58.906967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1331343, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8035107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:48:58.906988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1331343, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8035107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1331370, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8116853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1331401, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8154068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1331401, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8154068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1331343, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8035107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1331328, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8006392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1331328, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8006392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1331401, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8154068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1331456, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8280907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1331456, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8280907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1331328, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8006392, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1331403, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8243854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1331403, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8243854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:02.672848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1331456, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8280907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331463, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8291824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331463, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8291824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331497, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8340936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1331403, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8243854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331497, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8340936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331449, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8271158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331463, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8291824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331449, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8271158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331394, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8136063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331497, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8340936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331394, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8136063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331362, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.808077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:06.457840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331449, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8271158, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331362, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.808077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331389, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8123572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331394, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8136063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331389, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8123572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331347, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8065922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331362, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.808077, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331347, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8065922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331398, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8150935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331389, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8123572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331398, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8150935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1331482, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8331084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331347, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8065922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:10.730774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1331482, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8331084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331472, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.831257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331398, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8150935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331472, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.831257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331332, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8010507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331332, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8010507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.552930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1331482, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8331084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331338, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8025422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331338, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8025422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331472, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.831257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331443, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8259156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331443, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8259156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331332, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8010507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:49:14.553126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331467, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.829655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:50:55.863542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331467, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.829655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:50:55.863674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331338, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8025422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:50:55.863690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331443, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.8259156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:50:55.863700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331467, 'dev': 91, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771552127.829655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-20 03:50:55.863731 | orchestrator |
2026-02-20 03:50:55.863743 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-20 03:50:55.863753 | orchestrator | Friday 20 February 2026 03:49:16 +0000 (0:00:38.358) 0:00:53.014 *******
2026-02-20 03:50:55.863763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-20 03:50:55.863787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-20 03:50:55.863803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-20 03:50:55.863812 | orchestrator | 2026-02-20 03:50:55.863821 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-20 03:50:55.863830 | orchestrator | Friday 20 February 2026 03:49:17 +0000 (0:00:01.065) 0:00:54.080 ******* 2026-02-20 03:50:55.863839 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:50:55.863848 | orchestrator | 2026-02-20 03:50:55.863857 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-20 03:50:55.863866 | orchestrator | Friday 20 February 2026 03:49:20 +0000 (0:00:02.254) 0:00:56.335 ******* 2026-02-20 03:50:55.863875 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:50:55.863883 | orchestrator | 2026-02-20 03:50:55.863892 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-20 03:50:55.863901 | 
orchestrator | Friday 20 February 2026 03:49:22 +0000 (0:00:02.197) 0:00:58.532 ******* 2026-02-20 03:50:55.863909 | orchestrator | 2026-02-20 03:50:55.863918 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-20 03:50:55.863927 | orchestrator | Friday 20 February 2026 03:49:22 +0000 (0:00:00.070) 0:00:58.603 ******* 2026-02-20 03:50:55.863936 | orchestrator | 2026-02-20 03:50:55.863944 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-20 03:50:55.863953 | orchestrator | Friday 20 February 2026 03:49:22 +0000 (0:00:00.069) 0:00:58.672 ******* 2026-02-20 03:50:55.863962 | orchestrator | 2026-02-20 03:50:55.863970 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-20 03:50:55.863979 | orchestrator | Friday 20 February 2026 03:49:22 +0000 (0:00:00.070) 0:00:58.743 ******* 2026-02-20 03:50:55.863987 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:50:55.864004 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:50:55.864014 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:50:55.864024 | orchestrator | 2026-02-20 03:50:55.864034 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-20 03:50:55.864044 | orchestrator | Friday 20 February 2026 03:49:24 +0000 (0:00:02.101) 0:01:00.844 ******* 2026-02-20 03:50:55.864054 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:50:55.864064 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:50:55.864074 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-20 03:50:55.864085 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-02-20 03:50:55.864095 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-20 03:50:55.864105 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-02-20 03:50:55.864115 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:50:55.864126 | orchestrator | 2026-02-20 03:50:55.864136 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-20 03:50:55.864146 | orchestrator | Friday 20 February 2026 03:50:15 +0000 (0:00:50.256) 0:01:51.101 ******* 2026-02-20 03:50:55.864156 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:50:55.864166 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:50:55.864176 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:50:55.864210 | orchestrator | 2026-02-20 03:50:55.864221 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-20 03:50:55.864231 | orchestrator | Friday 20 February 2026 03:50:50 +0000 (0:00:35.828) 0:02:26.930 ******* 2026-02-20 03:50:55.864241 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:50:55.864252 | orchestrator | 2026-02-20 03:50:55.864267 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-20 03:50:55.864282 | orchestrator | Friday 20 February 2026 03:50:52 +0000 (0:00:02.089) 0:02:29.020 ******* 2026-02-20 03:50:55.864297 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:50:55.864320 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:50:55.864334 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:50:55.864349 | orchestrator | 2026-02-20 03:50:55.864363 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-20 03:50:55.864378 | orchestrator | Friday 20 February 2026 03:50:53 +0000 (0:00:00.303) 0:02:29.323 ******* 2026-02-20 
03:50:55.864416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-20 03:50:55.864453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-20 03:50:56.452990 | orchestrator | 2026-02-20 03:50:56.453093 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-20 03:50:56.453110 | orchestrator | Friday 20 February 2026 03:50:55 +0000 (0:00:02.621) 0:02:31.945 ******* 2026-02-20 03:50:56.453122 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:50:56.453134 | orchestrator | 2026-02-20 03:50:56.453145 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:50:56.453177 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:50:56.453305 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:50:56.453343 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-20 03:50:56.453354 | orchestrator | 2026-02-20 03:50:56.453365 | orchestrator | 2026-02-20 03:50:56.453377 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:50:56.453388 | orchestrator | Friday 20 February 2026 03:50:56 +0000 (0:00:00.273) 0:02:32.218 ******* 2026-02-20 03:50:56.453399 | 
orchestrator | =============================================================================== 2026-02-20 03:50:56.453410 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.26s 2026-02-20 03:50:56.453421 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.36s 2026-02-20 03:50:56.453432 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.83s 2026-02-20 03:50:56.453443 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.62s 2026-02-20 03:50:56.453453 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.25s 2026-02-20 03:50:56.453464 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.20s 2026-02-20 03:50:56.453475 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.10s 2026-02-20 03:50:56.453486 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.09s 2026-02-20 03:50:56.453497 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.79s 2026-02-20 03:50:56.453507 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.61s 2026-02-20 03:50:56.453518 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.43s 2026-02-20 03:50:56.453529 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2026-02-20 03:50:56.453542 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s 2026-02-20 03:50:56.453554 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2026-02-20 03:50:56.453567 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s 2026-02-20 03:50:56.453579 | orchestrator | 
grafana : Check if extra configuration file exists ---------------------- 0.81s 2026-02-20 03:50:56.453591 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s 2026-02-20 03:50:56.453604 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.72s 2026-02-20 03:50:56.453617 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.63s 2026-02-20 03:50:56.453630 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.58s 2026-02-20 03:50:56.739435 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-20 03:50:56.745214 | orchestrator | + set -e 2026-02-20 03:50:56.745328 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 03:50:56.745353 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 03:50:56.745373 | orchestrator | ++ INTERACTIVE=false 2026-02-20 03:50:56.745390 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 03:50:56.745407 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 03:50:56.745426 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 03:50:56.745445 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 03:50:56.745465 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 03:50:56.745500 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 03:50:56.745512 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 03:50:56.745523 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 03:50:56.745534 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 03:50:56.745545 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 03:50:56.745556 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 03:50:56.745567 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 03:50:56.745578 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 03:50:56.745589 | orchestrator | ++ export ARA=false 2026-02-20 03:50:56.745600 | 
orchestrator | ++ ARA=false 2026-02-20 03:50:56.745611 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 03:50:56.745622 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 03:50:56.745661 | orchestrator | ++ export TEMPEST=false 2026-02-20 03:50:56.745672 | orchestrator | ++ TEMPEST=false 2026-02-20 03:50:56.745683 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 03:50:56.745694 | orchestrator | ++ IS_ZUUL=true 2026-02-20 03:50:56.745705 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:50:56.745716 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:50:56.745727 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 03:50:56.745738 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 03:50:56.745749 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 03:50:56.745760 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 03:50:56.745772 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 03:50:56.745785 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 03:50:56.745797 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 03:50:56.745811 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 03:50:56.746014 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-20 03:50:56.804098 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-20 03:50:56.804760 | orchestrator | + osism apply clusterapi 2026-02-20 03:50:58.785039 | orchestrator | 2026-02-20 03:50:58 | INFO  | Task 5c198491-5007-4103-982e-2a54c9f7072d (clusterapi) was prepared for execution. 2026-02-20 03:50:58.785124 | orchestrator | 2026-02-20 03:50:58 | INFO  | It takes a moment until task 5c198491-5007-4103-982e-2a54c9f7072d (clusterapi) has been started and output is visible here. 
2026-02-20 03:52:04.049659 | orchestrator | 2026-02-20 03:52:04.049762 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-20 03:52:04.049775 | orchestrator | 2026-02-20 03:52:04.049787 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-20 03:52:04.049798 | orchestrator | Friday 20 February 2026 03:51:02 +0000 (0:00:00.183) 0:00:00.183 ******* 2026-02-20 03:52:04.049808 | orchestrator | included: cert_manager for testbed-manager 2026-02-20 03:52:04.049818 | orchestrator | 2026-02-20 03:52:04.049828 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-20 03:52:04.049838 | orchestrator | Friday 20 February 2026 03:51:03 +0000 (0:00:00.247) 0:00:00.430 ******* 2026-02-20 03:52:04.049867 | orchestrator | changed: [testbed-manager] 2026-02-20 03:52:04.049876 | orchestrator | 2026-02-20 03:52:04.049882 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-20 03:52:04.049889 | orchestrator | Friday 20 February 2026 03:51:08 +0000 (0:00:05.179) 0:00:05.610 ******* 2026-02-20 03:52:04.049896 | orchestrator | changed: [testbed-manager] 2026-02-20 03:52:04.049902 | orchestrator | 2026-02-20 03:52:04.049908 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-20 03:52:04.049915 | orchestrator | 2026-02-20 03:52:04.049921 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-20 03:52:04.049927 | orchestrator | Friday 20 February 2026 03:51:42 +0000 (0:00:34.550) 0:00:40.160 ******* 2026-02-20 03:52:04.049933 | orchestrator | ok: [testbed-manager] 2026-02-20 03:52:04.049940 | orchestrator | 2026-02-20 03:52:04.049947 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-20 03:52:04.049956 | orchestrator | Friday 20 
February 2026 03:51:44 +0000 (0:00:01.114) 0:00:41.274 ******* 2026-02-20 03:52:04.049966 | orchestrator | ok: [testbed-manager] 2026-02-20 03:52:04.049977 | orchestrator | 2026-02-20 03:52:04.049984 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-20 03:52:04.049990 | orchestrator | Friday 20 February 2026 03:51:44 +0000 (0:00:00.168) 0:00:41.443 ******* 2026-02-20 03:52:04.049997 | orchestrator | ok: [testbed-manager] 2026-02-20 03:52:04.050003 | orchestrator | 2026-02-20 03:52:04.050009 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-20 03:52:04.050062 | orchestrator | Friday 20 February 2026 03:52:01 +0000 (0:00:17.181) 0:00:58.625 ******* 2026-02-20 03:52:04.050073 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:52:04.050079 | orchestrator | 2026-02-20 03:52:04.050086 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-20 03:52:04.050092 | orchestrator | Friday 20 February 2026 03:52:01 +0000 (0:00:00.134) 0:00:58.760 ******* 2026-02-20 03:52:04.050117 | orchestrator | changed: [testbed-manager] 2026-02-20 03:52:04.050124 | orchestrator | 2026-02-20 03:52:04.050130 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:52:04.050137 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-20 03:52:04.050144 | orchestrator | 2026-02-20 03:52:04.050151 | orchestrator | 2026-02-20 03:52:04.050157 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:52:04.050163 | orchestrator | Friday 20 February 2026 03:52:03 +0000 (0:00:02.130) 0:01:00.890 ******* 2026-02-20 03:52:04.050169 | orchestrator | =============================================================================== 2026-02-20 03:52:04.050175 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 34.55s 2026-02-20 03:52:04.050181 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.18s 2026-02-20 03:52:04.050252 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.18s 2026-02-20 03:52:04.050259 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.13s 2026-02-20 03:52:04.050267 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s 2026-02-20 03:52:04.050274 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s 2026-02-20 03:52:04.050282 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s 2026-02-20 03:52:04.050298 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.13s 2026-02-20 03:52:04.312346 | orchestrator | + osism apply magnum 2026-02-20 03:52:06.349434 | orchestrator | 2026-02-20 03:52:06 | INFO  | Task c1403529-60d7-4557-94bc-a9dad6f19aa5 (magnum) was prepared for execution. 2026-02-20 03:52:06.349528 | orchestrator | 2026-02-20 03:52:06 | INFO  | It takes a moment until task c1403529-60d7-4557-94bc-a9dad6f19aa5 (magnum) has been started and output is visible here. 
2026-02-20 03:52:48.338574 | orchestrator | 2026-02-20 03:52:48.338705 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 03:52:48.338722 | orchestrator | 2026-02-20 03:52:48.338735 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 03:52:48.338746 | orchestrator | Friday 20 February 2026 03:52:10 +0000 (0:00:00.274) 0:00:00.274 ******* 2026-02-20 03:52:48.338757 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:52:48.338770 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:52:48.338781 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:52:48.338806 | orchestrator | 2026-02-20 03:52:48.338818 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 03:52:48.338829 | orchestrator | Friday 20 February 2026 03:52:10 +0000 (0:00:00.319) 0:00:00.593 ******* 2026-02-20 03:52:48.338840 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-20 03:52:48.338852 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-20 03:52:48.338863 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-20 03:52:48.338874 | orchestrator | 2026-02-20 03:52:48.338885 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-20 03:52:48.338897 | orchestrator | 2026-02-20 03:52:48.338908 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-20 03:52:48.338919 | orchestrator | Friday 20 February 2026 03:52:11 +0000 (0:00:00.438) 0:00:01.032 ******* 2026-02-20 03:52:48.338935 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:52:48.338955 | orchestrator | 2026-02-20 03:52:48.338974 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-20 
03:52:48.338992 | orchestrator | Friday 20 February 2026 03:52:11 +0000 (0:00:00.557) 0:00:01.589 ******* 2026-02-20 03:52:48.339011 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-20 03:52:48.339031 | orchestrator | 2026-02-20 03:52:48.339104 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-20 03:52:48.339129 | orchestrator | Friday 20 February 2026 03:52:15 +0000 (0:00:03.521) 0:00:05.111 ******* 2026-02-20 03:52:48.339151 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-20 03:52:48.339173 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-20 03:52:48.339228 | orchestrator | 2026-02-20 03:52:48.339248 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-20 03:52:48.339267 | orchestrator | Friday 20 February 2026 03:52:21 +0000 (0:00:06.528) 0:00:11.640 ******* 2026-02-20 03:52:48.339285 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-20 03:52:48.339303 | orchestrator | 2026-02-20 03:52:48.339320 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-20 03:52:48.339336 | orchestrator | Friday 20 February 2026 03:52:25 +0000 (0:00:03.407) 0:00:15.047 ******* 2026-02-20 03:52:48.339353 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-20 03:52:48.339371 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-20 03:52:48.339389 | orchestrator | 2026-02-20 03:52:48.339407 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-20 03:52:48.339424 | orchestrator | Friday 20 February 2026 03:52:29 +0000 (0:00:03.921) 0:00:18.969 ******* 2026-02-20 03:52:48.339443 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-20 03:52:48.339461 | orchestrator | 2026-02-20 03:52:48.339479 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-20 03:52:48.339497 | orchestrator | Friday 20 February 2026 03:52:32 +0000 (0:00:03.230) 0:00:22.200 ******* 2026-02-20 03:52:48.339514 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-20 03:52:48.339531 | orchestrator | 2026-02-20 03:52:48.339549 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-20 03:52:48.339568 | orchestrator | Friday 20 February 2026 03:52:36 +0000 (0:00:03.753) 0:00:25.953 ******* 2026-02-20 03:52:48.339586 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:52:48.339606 | orchestrator | 2026-02-20 03:52:48.339624 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-20 03:52:48.339643 | orchestrator | Friday 20 February 2026 03:52:39 +0000 (0:00:03.305) 0:00:29.258 ******* 2026-02-20 03:52:48.339662 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:52:48.339681 | orchestrator | 2026-02-20 03:52:48.339699 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-20 03:52:48.339717 | orchestrator | Friday 20 February 2026 03:52:43 +0000 (0:00:03.884) 0:00:33.143 ******* 2026-02-20 03:52:48.339735 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:52:48.339754 | orchestrator | 2026-02-20 03:52:48.339773 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-20 03:52:48.339792 | orchestrator | Friday 20 February 2026 03:52:46 +0000 (0:00:03.415) 0:00:36.558 ******* 2026-02-20 03:52:48.339839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:48.339872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:48.339893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:48.339905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:48.339918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:48.339938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:55.582757 | orchestrator | 2026-02-20 03:52:55.582899 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-20 03:52:55.582927 | orchestrator | Friday 20 February 2026 03:52:48 +0000 (0:00:01.569) 0:00:38.127 ******* 2026-02-20 03:52:55.582946 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:52:55.582965 | orchestrator | 2026-02-20 03:52:55.582984 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-20 03:52:55.583004 | orchestrator | Friday 20 February 2026 03:52:48 +0000 (0:00:00.137) 0:00:38.265 ******* 2026-02-20 03:52:55.583022 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:52:55.583039 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:52:55.583057 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:52:55.583074 | orchestrator | 2026-02-20 03:52:55.583092 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-20 03:52:55.583110 | orchestrator | Friday 20 February 2026 03:52:48 +0000 (0:00:00.291) 0:00:38.556 ******* 2026-02-20 03:52:55.583127 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 03:52:55.583146 | orchestrator | 2026-02-20 03:52:55.583164 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-20 03:52:55.583285 | orchestrator | Friday 20 February 2026 03:52:49 +0000 (0:00:00.856) 0:00:39.413 ******* 2026-02-20 03:52:55.583340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:55.583370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:55.583394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:55.583475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:55.583500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:55.583528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:55.583547 | orchestrator | 2026-02-20 03:52:55.583566 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-20 03:52:55.583583 
| orchestrator | Friday 20 February 2026 03:52:52 +0000 (0:00:02.402) 0:00:41.816 ******* 2026-02-20 03:52:55.583600 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:52:55.583621 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:52:55.583639 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:52:55.583658 | orchestrator | 2026-02-20 03:52:55.583675 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-20 03:52:55.583691 | orchestrator | Friday 20 February 2026 03:52:52 +0000 (0:00:00.450) 0:00:42.266 ******* 2026-02-20 03:52:55.583708 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 03:52:55.583722 | orchestrator | 2026-02-20 03:52:55.583738 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-20 03:52:55.583753 | orchestrator | Friday 20 February 2026 03:52:53 +0000 (0:00:00.583) 0:00:42.850 ******* 2026-02-20 03:52:55.583770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:55.583814 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:56.462774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:56.462925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:56.462958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:56.462980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:52:56.463016 | orchestrator | 2026-02-20 03:52:56.463030 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-20 03:52:56.463042 | orchestrator | Friday 20 February 2026 03:52:55 +0000 (0:00:02.535) 0:00:45.385 ******* 2026-02-20 03:52:56.463075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:56.463088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:56.463100 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:52:56.463119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:56.463132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:56.463150 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:52:56.463161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:56.463213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:59.881384 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:52:59.881494 | orchestrator | 2026-02-20 
03:52:59.881509 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-20 03:52:59.881522 | orchestrator | Friday 20 February 2026 03:52:56 +0000 (0:00:00.877) 0:00:46.263 ******* 2026-02-20 03:52:59.881552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:59.881570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:59.881606 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 03:52:59.881619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:59.881631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:59.881642 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:52:59.881670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:52:59.881688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:52:59.881700 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:52:59.881711 | orchestrator | 2026-02-20 03:52:59.881723 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-20 03:52:59.881734 | orchestrator | Friday 20 February 2026 03:52:57 +0000 (0:00:00.835) 0:00:47.099 ******* 2026-02-20 03:52:59.881746 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:59.881784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:52:59.881805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:05.815278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815594 | orchestrator | 2026-02-20 03:53:05.815608 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-20 03:53:05.815620 | orchestrator | Friday 20 February 2026 03:52:59 +0000 (0:00:02.584) 0:00:49.684 ******* 2026-02-20 03:53:05.815633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:05.815666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:05.815687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:05.815699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:05.815753 | orchestrator | 2026-02-20 03:53:05.815772 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-20 03:53:05.815789 | orchestrator | Friday 20 February 2026 03:53:05 +0000 (0:00:05.257) 0:00:54.941 ******* 2026-02-20 03:53:05.815821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:53:07.792484 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:53:07.792611 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:53:07.792631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:53:07.792646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:53:07.792657 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:53:07.792669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-20 03:53:07.792699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 03:53:07.792712 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:53:07.792723 | orchestrator | 2026-02-20 03:53:07.792735 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-20 03:53:07.792748 | orchestrator | Friday 20 February 2026 03:53:05 +0000 (0:00:00.677) 0:00:55.619 ******* 2026-02-20 03:53:07.792767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:07.792788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:07.792800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-20 03:53:07.792811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:53:07.792837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-20 03:54:06.043732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-20 03:54:06.043849 | orchestrator | 2026-02-20 03:54:06.043865 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-20 03:54:06.043877 | orchestrator | Friday 20 February 2026 03:53:07 +0000 (0:00:01.970) 0:00:57.589 ******* 2026-02-20 03:54:06.043887 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:54:06.043898 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:54:06.043908 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:54:06.043918 | orchestrator | 2026-02-20 03:54:06.043928 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-20 03:54:06.043938 | orchestrator | Friday 20 February 2026 03:53:08 +0000 (0:00:00.503) 0:00:58.093 ******* 2026-02-20 03:54:06.043948 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:54:06.043963 | orchestrator | 2026-02-20 03:54:06.043981 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-20 03:54:06.044002 | orchestrator | Friday 20 February 2026 03:53:10 +0000 (0:00:02.117) 0:01:00.210 ******* 2026-02-20 03:54:06.044027 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:54:06.044045 | orchestrator | 2026-02-20 03:54:06.044062 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-20 03:54:06.044080 | orchestrator | Friday 20 February 2026 03:53:12 +0000 (0:00:02.258) 0:01:02.468 ******* 2026-02-20 03:54:06.044099 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:54:06.044116 | orchestrator | 2026-02-20 03:54:06.044131 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-20 03:54:06.044141 | orchestrator | Friday 20 February 2026 03:53:28 +0000 (0:00:16.150) 0:01:18.618 ******* 2026-02-20 03:54:06.044151 | orchestrator | 2026-02-20 03:54:06.044161 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-20 03:54:06.044171 | orchestrator | Friday 20 February 2026 03:53:28 +0000 (0:00:00.073) 0:01:18.692 ******* 2026-02-20 03:54:06.044207 | orchestrator | 2026-02-20 03:54:06.044218 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-20 03:54:06.044228 | orchestrator | Friday 20 February 2026 03:53:28 +0000 (0:00:00.072) 0:01:18.764 ******* 2026-02-20 03:54:06.044237 | orchestrator | 2026-02-20 03:54:06.044247 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-20 03:54:06.044257 | orchestrator | Friday 20 February 2026 03:53:29 +0000 (0:00:00.073) 0:01:18.838 ******* 2026-02-20 03:54:06.044269 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:54:06.044284 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:54:06.044306 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:54:06.044328 | orchestrator | 2026-02-20 03:54:06.044344 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-20 03:54:06.044361 | orchestrator | Friday 20 February 2026 03:53:49 +0000 (0:00:20.854) 0:01:39.692 ******* 2026-02-20 03:54:06.044379 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:54:06.044396 | orchestrator | changed: [testbed-node-2] 2026-02-20 03:54:06.044414 | orchestrator | changed: [testbed-node-1] 2026-02-20 03:54:06.044452 | orchestrator | 2026-02-20 03:54:06.044464 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:54:06.044478 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 03:54:06.044490 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-20 03:54:06.044502 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-20 03:54:06.044513 | orchestrator | 2026-02-20 03:54:06.044524 | orchestrator | 2026-02-20 03:54:06.044536 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:54:06.044548 | orchestrator | Friday 20 February 2026 03:54:05 +0000 (0:00:15.850) 0:01:55.542 ******* 2026-02-20 03:54:06.044559 | orchestrator | =============================================================================== 2026-02-20 03:54:06.044570 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.85s 2026-02-20 03:54:06.044581 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.15s 2026-02-20 03:54:06.044593 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.85s 2026-02-20 03:54:06.044610 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.53s 2026-02-20 03:54:06.044634 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.26s 2026-02-20 03:54:06.044652 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.92s 2026-02-20 03:54:06.044668 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.88s 2026-02-20 03:54:06.044723 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.75s 2026-02-20 03:54:06.044742 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.52s 2026-02-20 03:54:06.044758 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.42s 2026-02-20 03:54:06.044844 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.41s 2026-02-20 03:54:06.044866 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.31s 2026-02-20 03:54:06.044883 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s 2026-02-20 03:54:06.044899 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.58s 2026-02-20 03:54:06.044915 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.54s 2026-02-20 03:54:06.044932 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s 2026-02-20 03:54:06.044949 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.26s 2026-02-20 03:54:06.044966 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.12s 2026-02-20 03:54:06.044982 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.97s 2026-02-20 03:54:06.045000 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.57s 2026-02-20 03:54:06.639972 | orchestrator | ok: Runtime: 1:40:33.441947 2026-02-20 03:54:06.874595 | 2026-02-20 03:54:06.874733 | TASK [Deploy in a nutshell] 2026-02-20 03:54:07.407823 | orchestrator | skipping: Conditional result was False 2026-02-20 03:54:07.430497 | 2026-02-20 03:54:07.430648 | TASK [Bootstrap services] 2026-02-20 03:54:08.152292 | orchestrator | 2026-02-20 03:54:08.152475 | orchestrator | # BOOTSTRAP 2026-02-20 03:54:08.152496 | orchestrator | 2026-02-20 03:54:08.152509 | orchestrator | + set -e 2026-02-20 03:54:08.152521 | orchestrator | + echo 2026-02-20 03:54:08.152532 | orchestrator | + echo '# BOOTSTRAP' 2026-02-20 03:54:08.152547 | orchestrator | + echo 2026-02-20 03:54:08.152585 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-20 03:54:08.161281 | orchestrator | + set -e 2026-02-20 03:54:08.161382 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-20 03:54:10.207059 | orchestrator | 2026-02-20 03:54:10 | INFO  | It takes a 
moment until task 5dabf132-7a06-42de-af53-84ce69e57ccd (flavor-manager) has been started and output is visible here. 2026-02-20 03:54:17.696820 | orchestrator | 2026-02-20 03:54:13 | INFO  | Flavor SCS-1L-1 created 2026-02-20 03:54:17.696984 | orchestrator | 2026-02-20 03:54:13 | INFO  | Flavor SCS-1L-1-5 created 2026-02-20 03:54:17.697007 | orchestrator | 2026-02-20 03:54:13 | INFO  | Flavor SCS-1V-2 created 2026-02-20 03:54:17.697020 | orchestrator | 2026-02-20 03:54:13 | INFO  | Flavor SCS-1V-2-5 created 2026-02-20 03:54:17.697032 | orchestrator | 2026-02-20 03:54:13 | INFO  | Flavor SCS-1V-4 created 2026-02-20 03:54:17.697044 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-1V-4-10 created 2026-02-20 03:54:17.697055 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-1V-8 created 2026-02-20 03:54:17.697068 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-1V-8-20 created 2026-02-20 03:54:17.697095 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-2V-4 created 2026-02-20 03:54:17.697107 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-2V-4-10 created 2026-02-20 03:54:17.697119 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-2V-8 created 2026-02-20 03:54:17.697130 | orchestrator | 2026-02-20 03:54:14 | INFO  | Flavor SCS-2V-8-20 created 2026-02-20 03:54:17.697141 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-2V-16 created 2026-02-20 03:54:17.697152 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-2V-16-50 created 2026-02-20 03:54:17.697164 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-4V-8 created 2026-02-20 03:54:17.697279 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-4V-8-20 created 2026-02-20 03:54:17.697293 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-4V-16 created 2026-02-20 03:54:17.697313 | orchestrator | 2026-02-20 03:54:15 | INFO  | Flavor SCS-4V-16-50 created 2026-02-20 03:54:17.697333 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor 
SCS-4V-32 created 2026-02-20 03:54:17.697352 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-4V-32-100 created 2026-02-20 03:54:17.697370 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-8V-16 created 2026-02-20 03:54:17.697390 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-8V-16-50 created 2026-02-20 03:54:17.697408 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-8V-32 created 2026-02-20 03:54:17.697428 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-8V-32-100 created 2026-02-20 03:54:17.697445 | orchestrator | 2026-02-20 03:54:16 | INFO  | Flavor SCS-16V-32 created 2026-02-20 03:54:17.697465 | orchestrator | 2026-02-20 03:54:17 | INFO  | Flavor SCS-16V-32-100 created 2026-02-20 03:54:17.697485 | orchestrator | 2026-02-20 03:54:17 | INFO  | Flavor SCS-2V-4-20s created 2026-02-20 03:54:17.697506 | orchestrator | 2026-02-20 03:54:17 | INFO  | Flavor SCS-4V-8-50s created 2026-02-20 03:54:17.697525 | orchestrator | 2026-02-20 03:54:17 | INFO  | Flavor SCS-8V-32-100s created 2026-02-20 03:54:19.934948 | orchestrator | 2026-02-20 03:54:19 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-20 03:54:30.117418 | orchestrator | 2026-02-20 03:54:30 | INFO  | Task 96dbedf0-d17d-43fd-b6c6-6e2da85095b9 (bootstrap-basic) was prepared for execution. 2026-02-20 03:54:30.117514 | orchestrator | 2026-02-20 03:54:30 | INFO  | It takes a moment until task 96dbedf0-d17d-43fd-b6c6-6e2da85095b9 (bootstrap-basic) has been started and output is visible here. 
2026-02-20 03:55:10.953537 | orchestrator | 2026-02-20 03:55:10.953657 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-20 03:55:10.953674 | orchestrator | 2026-02-20 03:55:10.953687 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-20 03:55:10.953698 | orchestrator | Friday 20 February 2026 03:54:34 +0000 (0:00:00.072) 0:00:00.072 ******* 2026-02-20 03:55:10.953709 | orchestrator | ok: [localhost] 2026-02-20 03:55:10.953722 | orchestrator | 2026-02-20 03:55:10.953733 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-20 03:55:10.953744 | orchestrator | Friday 20 February 2026 03:54:36 +0000 (0:00:01.754) 0:00:01.826 ******* 2026-02-20 03:55:10.953755 | orchestrator | ok: [localhost] 2026-02-20 03:55:10.953766 | orchestrator | 2026-02-20 03:55:10.953778 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-20 03:55:10.953788 | orchestrator | Friday 20 February 2026 03:54:42 +0000 (0:00:06.566) 0:00:08.393 ******* 2026-02-20 03:55:10.953799 | orchestrator | changed: [localhost] 2026-02-20 03:55:10.953811 | orchestrator | 2026-02-20 03:55:10.953822 | orchestrator | TASK [Create public network] *************************************************** 2026-02-20 03:55:10.953833 | orchestrator | Friday 20 February 2026 03:54:48 +0000 (0:00:05.875) 0:00:14.268 ******* 2026-02-20 03:55:10.953844 | orchestrator | changed: [localhost] 2026-02-20 03:55:10.953855 | orchestrator | 2026-02-20 03:55:10.953866 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-20 03:55:10.953877 | orchestrator | Friday 20 February 2026 03:54:53 +0000 (0:00:04.723) 0:00:18.992 ******* 2026-02-20 03:55:10.953892 | orchestrator | changed: [localhost] 2026-02-20 03:55:10.953904 | orchestrator | 2026-02-20 03:55:10.953915 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-20 03:55:10.953926 | orchestrator | Friday 20 February 2026 03:54:59 +0000 (0:00:06.055) 0:00:25.047 ******* 2026-02-20 03:55:10.953937 | orchestrator | changed: [localhost] 2026-02-20 03:55:10.953948 | orchestrator | 2026-02-20 03:55:10.953959 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-20 03:55:10.953970 | orchestrator | Friday 20 February 2026 03:55:03 +0000 (0:00:04.333) 0:00:29.381 ******* 2026-02-20 03:55:10.953981 | orchestrator | changed: [localhost] 2026-02-20 03:55:10.953991 | orchestrator | 2026-02-20 03:55:10.954002 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-20 03:55:10.954085 | orchestrator | Friday 20 February 2026 03:55:07 +0000 (0:00:03.669) 0:00:33.051 ******* 2026-02-20 03:55:10.954102 | orchestrator | ok: [localhost] 2026-02-20 03:55:10.954115 | orchestrator | 2026-02-20 03:55:10.954128 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:55:10.954141 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 03:55:10.954155 | orchestrator | 2026-02-20 03:55:10.954187 | orchestrator | 2026-02-20 03:55:10.954200 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:55:10.954212 | orchestrator | Friday 20 February 2026 03:55:10 +0000 (0:00:03.392) 0:00:36.444 ******* 2026-02-20 03:55:10.954225 | orchestrator | =============================================================================== 2026-02-20 03:55:10.954238 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.57s 2026-02-20 03:55:10.954251 | orchestrator | Set public network to default ------------------------------------------- 6.06s 2026-02-20 03:55:10.954263 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 5.88s 2026-02-20 03:55:10.954277 | orchestrator | Create public network --------------------------------------------------- 4.72s 2026-02-20 03:55:10.954309 | orchestrator | Create public subnet ---------------------------------------------------- 4.33s 2026-02-20 03:55:10.954322 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.67s 2026-02-20 03:55:10.954335 | orchestrator | Create manager role ----------------------------------------------------- 3.39s 2026-02-20 03:55:10.954348 | orchestrator | Gathering Facts --------------------------------------------------------- 1.75s 2026-02-20 03:55:13.241607 | orchestrator | 2026-02-20 03:55:13 | INFO  | It takes a moment until task 97b0a0b7-54a3-4061-8429-75e83926409c (image-manager) has been started and output is visible here. 2026-02-20 03:55:57.556313 | orchestrator | 2026-02-20 03:55:16 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-20 03:55:57.556433 | orchestrator | 2026-02-20 03:55:16 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-20 03:55:57.556458 | orchestrator | 2026-02-20 03:55:16 | INFO  | Importing image Cirros 0.6.2 2026-02-20 03:55:57.556477 | orchestrator | 2026-02-20 03:55:16 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-20 03:55:57.556494 | orchestrator | 2026-02-20 03:55:18 | INFO  | Waiting for image to leave queued state... 2026-02-20 03:55:57.556512 | orchestrator | 2026-02-20 03:55:20 | INFO  | Waiting for import to complete... 
2026-02-20 03:55:57.556529 | orchestrator | 2026-02-20 03:55:30 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-20 03:55:57.556561 | orchestrator | 2026-02-20 03:55:30 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-20 03:55:57.556596 | orchestrator | 2026-02-20 03:55:30 | INFO  | Setting internal_version = 0.6.2 2026-02-20 03:55:57.556632 | orchestrator | 2026-02-20 03:55:30 | INFO  | Setting image_original_user = cirros 2026-02-20 03:55:57.556668 | orchestrator | 2026-02-20 03:55:30 | INFO  | Adding tag os:cirros 2026-02-20 03:55:57.556703 | orchestrator | 2026-02-20 03:55:30 | INFO  | Setting property architecture: x86_64 2026-02-20 03:55:57.556737 | orchestrator | 2026-02-20 03:55:31 | INFO  | Setting property hw_disk_bus: scsi 2026-02-20 03:55:57.556773 | orchestrator | 2026-02-20 03:55:31 | INFO  | Setting property hw_rng_model: virtio 2026-02-20 03:55:57.556810 | orchestrator | 2026-02-20 03:55:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-20 03:55:57.556846 | orchestrator | 2026-02-20 03:55:32 | INFO  | Setting property hw_watchdog_action: reset 2026-02-20 03:55:57.556881 | orchestrator | 2026-02-20 03:55:32 | INFO  | Setting property hypervisor_type: qemu 2026-02-20 03:55:57.556918 | orchestrator | 2026-02-20 03:55:32 | INFO  | Setting property os_distro: cirros 2026-02-20 03:55:57.556952 | orchestrator | 2026-02-20 03:55:32 | INFO  | Setting property os_purpose: minimal 2026-02-20 03:55:57.556978 | orchestrator | 2026-02-20 03:55:33 | INFO  | Setting property replace_frequency: never 2026-02-20 03:55:57.557004 | orchestrator | 2026-02-20 03:55:33 | INFO  | Setting property uuid_validity: none 2026-02-20 03:55:57.557029 | orchestrator | 2026-02-20 03:55:33 | INFO  | Setting property provided_until: none 2026-02-20 03:55:57.557054 | orchestrator | 2026-02-20 03:55:33 | INFO  | Setting property image_description: Cirros 2026-02-20 03:55:57.557080 | orchestrator | 2026-02-20 03:55:34 | INFO  | 
Setting property image_name: Cirros 2026-02-20 03:55:57.557142 | orchestrator | 2026-02-20 03:55:34 | INFO  | Setting property internal_version: 0.6.2 2026-02-20 03:55:57.557169 | orchestrator | 2026-02-20 03:55:34 | INFO  | Setting property image_original_user: cirros 2026-02-20 03:55:57.557234 | orchestrator | 2026-02-20 03:55:35 | INFO  | Setting property os_version: 0.6.2 2026-02-20 03:55:57.557280 | orchestrator | 2026-02-20 03:55:35 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-20 03:55:57.557308 | orchestrator | 2026-02-20 03:55:35 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-20 03:55:57.557333 | orchestrator | 2026-02-20 03:55:35 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-20 03:55:57.557359 | orchestrator | 2026-02-20 03:55:35 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-20 03:55:57.557383 | orchestrator | 2026-02-20 03:55:35 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-20 03:55:57.557409 | orchestrator | 2026-02-20 03:55:36 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-20 03:55:57.557434 | orchestrator | 2026-02-20 03:55:36 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-20 03:55:57.557452 | orchestrator | 2026-02-20 03:55:36 | INFO  | Importing image Cirros 0.6.3 2026-02-20 03:55:57.557470 | orchestrator | 2026-02-20 03:55:36 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-20 03:55:57.557488 | orchestrator | 2026-02-20 03:55:37 | INFO  | Waiting for image to leave queued state... 2026-02-20 03:55:57.557505 | orchestrator | 2026-02-20 03:55:40 | INFO  | Waiting for import to complete... 
2026-02-20 03:55:57.557551 | orchestrator | 2026-02-20 03:55:50 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-20 03:55:57.557570 | orchestrator | 2026-02-20 03:55:51 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-20 03:55:57.557589 | orchestrator | 2026-02-20 03:55:51 | INFO  | Setting internal_version = 0.6.3 2026-02-20 03:55:57.557607 | orchestrator | 2026-02-20 03:55:51 | INFO  | Setting image_original_user = cirros 2026-02-20 03:55:57.557624 | orchestrator | 2026-02-20 03:55:51 | INFO  | Adding tag os:cirros 2026-02-20 03:55:57.557636 | orchestrator | 2026-02-20 03:55:51 | INFO  | Setting property architecture: x86_64 2026-02-20 03:55:57.557647 | orchestrator | 2026-02-20 03:55:51 | INFO  | Setting property hw_disk_bus: scsi 2026-02-20 03:55:57.557657 | orchestrator | 2026-02-20 03:55:52 | INFO  | Setting property hw_rng_model: virtio 2026-02-20 03:55:57.557668 | orchestrator | 2026-02-20 03:55:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-20 03:55:57.557679 | orchestrator | 2026-02-20 03:55:52 | INFO  | Setting property hw_watchdog_action: reset 2026-02-20 03:55:57.557690 | orchestrator | 2026-02-20 03:55:52 | INFO  | Setting property hypervisor_type: qemu 2026-02-20 03:55:57.557701 | orchestrator | 2026-02-20 03:55:53 | INFO  | Setting property os_distro: cirros 2026-02-20 03:55:57.557712 | orchestrator | 2026-02-20 03:55:53 | INFO  | Setting property os_purpose: minimal 2026-02-20 03:55:57.557723 | orchestrator | 2026-02-20 03:55:53 | INFO  | Setting property replace_frequency: never 2026-02-20 03:55:57.557734 | orchestrator | 2026-02-20 03:55:53 | INFO  | Setting property uuid_validity: none 2026-02-20 03:55:57.557745 | orchestrator | 2026-02-20 03:55:54 | INFO  | Setting property provided_until: none 2026-02-20 03:55:57.557756 | orchestrator | 2026-02-20 03:55:54 | INFO  | Setting property image_description: Cirros 2026-02-20 03:55:57.557767 | orchestrator | 2026-02-20 03:55:55 | INFO  | 
Setting property image_name: Cirros 2026-02-20 03:55:57.557778 | orchestrator | 2026-02-20 03:55:55 | INFO  | Setting property internal_version: 0.6.3 2026-02-20 03:55:57.557803 | orchestrator | 2026-02-20 03:55:55 | INFO  | Setting property image_original_user: cirros 2026-02-20 03:55:57.557814 | orchestrator | 2026-02-20 03:55:55 | INFO  | Setting property os_version: 0.6.3 2026-02-20 03:55:57.557825 | orchestrator | 2026-02-20 03:55:56 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-20 03:55:57.557838 | orchestrator | 2026-02-20 03:55:56 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-20 03:55:57.557849 | orchestrator | 2026-02-20 03:55:56 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-20 03:55:57.557860 | orchestrator | 2026-02-20 03:55:56 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-20 03:55:57.557871 | orchestrator | 2026-02-20 03:55:56 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-20 03:55:57.833243 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-20 03:56:00.076450 | orchestrator | 2026-02-20 03:56:00 | INFO  | date: 2026-02-20 2026-02-20 03:56:00.076549 | orchestrator | 2026-02-20 03:56:00 | INFO  | image: octavia-amphora-haproxy-2024.2.20260220.qcow2 2026-02-20 03:56:00.076588 | orchestrator | 2026-02-20 03:56:00 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260220.qcow2 2026-02-20 03:56:00.077854 | orchestrator | 2026-02-20 03:56:00 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260220.qcow2.CHECKSUM 2026-02-20 03:56:00.310821 | orchestrator | 2026-02-20 03:56:00 | INFO  | checksum: f7cb3023a7ffd337dde7c3a2e7f60b79ba6f39adf196675c99d77afe5df3a086 2026-02-20 03:56:00.379920 | orchestrator | 
2026-02-20 03:56:00 | INFO  | It takes a moment until task ea00df21-1e23-49a5-9c52-2c18b4a9161c (image-manager) has been started and output is visible here. 2026-02-20 03:57:13.690212 | orchestrator | 2026-02-20 03:56:02 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-20' 2026-02-20 03:57:13.690352 | orchestrator | 2026-02-20 03:56:03 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260220.qcow2: 200 2026-02-20 03:57:13.690382 | orchestrator | 2026-02-20 03:56:03 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-20 2026-02-20 03:57:13.690403 | orchestrator | 2026-02-20 03:56:03 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260220.qcow2 2026-02-20 03:57:13.690422 | orchestrator | 2026-02-20 03:56:04 | INFO  | Waiting for image to leave queued state... 2026-02-20 03:57:13.690442 | orchestrator | 2026-02-20 03:56:06 | INFO  | Waiting for import to complete... 2026-02-20 03:57:13.690461 | orchestrator | 2026-02-20 03:56:16 | INFO  | Waiting for import to complete... 2026-02-20 03:57:13.690479 | orchestrator | 2026-02-20 03:56:26 | INFO  | Waiting for import to complete... 2026-02-20 03:57:13.690498 | orchestrator | 2026-02-20 03:56:36 | INFO  | Waiting for import to complete... 2026-02-20 03:57:13.690520 | orchestrator | 2026-02-20 03:56:46 | INFO  | Waiting for import to complete... 2026-02-20 03:57:13.690542 | orchestrator | 2026-02-20 03:56:57 | INFO  | Waiting for import to complete... 
2026-02-20 03:57:13.690561 | orchestrator | 2026-02-20 03:57:07 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-20' successfully completed, reloading images 2026-02-20 03:57:13.690582 | orchestrator | 2026-02-20 03:57:08 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-20' 2026-02-20 03:57:13.690634 | orchestrator | 2026-02-20 03:57:08 | INFO  | Setting internal_version = 2026-02-20 2026-02-20 03:57:13.690657 | orchestrator | 2026-02-20 03:57:08 | INFO  | Setting image_original_user = ubuntu 2026-02-20 03:57:13.690676 | orchestrator | 2026-02-20 03:57:08 | INFO  | Adding tag amphora 2026-02-20 03:57:13.690694 | orchestrator | 2026-02-20 03:57:08 | INFO  | Adding tag os:ubuntu 2026-02-20 03:57:13.690714 | orchestrator | 2026-02-20 03:57:08 | INFO  | Setting property architecture: x86_64 2026-02-20 03:57:13.690735 | orchestrator | 2026-02-20 03:57:08 | INFO  | Setting property hw_disk_bus: scsi 2026-02-20 03:57:13.690756 | orchestrator | 2026-02-20 03:57:08 | INFO  | Setting property hw_rng_model: virtio 2026-02-20 03:57:13.690778 | orchestrator | 2026-02-20 03:57:09 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-20 03:57:13.690800 | orchestrator | 2026-02-20 03:57:09 | INFO  | Setting property hw_watchdog_action: reset 2026-02-20 03:57:13.690814 | orchestrator | 2026-02-20 03:57:09 | INFO  | Setting property hypervisor_type: qemu 2026-02-20 03:57:13.690827 | orchestrator | 2026-02-20 03:57:10 | INFO  | Setting property os_distro: ubuntu 2026-02-20 03:57:13.690840 | orchestrator | 2026-02-20 03:57:10 | INFO  | Setting property replace_frequency: quarterly 2026-02-20 03:57:13.690852 | orchestrator | 2026-02-20 03:57:10 | INFO  | Setting property uuid_validity: last-1 2026-02-20 03:57:13.690865 | orchestrator | 2026-02-20 03:57:10 | INFO  | Setting property provided_until: none 2026-02-20 03:57:13.690877 | orchestrator | 2026-02-20 03:57:11 | INFO  | Setting property os_purpose: network 2026-02-20 03:57:13.690906 | orchestrator 
| 2026-02-20 03:57:11 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-20 03:57:13.690919 | orchestrator | 2026-02-20 03:57:11 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-20 03:57:13.690932 | orchestrator | 2026-02-20 03:57:11 | INFO  | Setting property internal_version: 2026-02-20 2026-02-20 03:57:13.690944 | orchestrator | 2026-02-20 03:57:12 | INFO  | Setting property image_original_user: ubuntu 2026-02-20 03:57:13.690957 | orchestrator | 2026-02-20 03:57:12 | INFO  | Setting property os_version: 2026-02-20 2026-02-20 03:57:13.690971 | orchestrator | 2026-02-20 03:57:12 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260220.qcow2 2026-02-20 03:57:13.690983 | orchestrator | 2026-02-20 03:57:12 | INFO  | Setting property image_build_date: 2026-02-20 2026-02-20 03:57:13.691035 | orchestrator | 2026-02-20 03:57:13 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-20' 2026-02-20 03:57:13.691047 | orchestrator | 2026-02-20 03:57:13 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-20' 2026-02-20 03:57:13.691083 | orchestrator | 2026-02-20 03:57:13 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-20 03:57:13.691103 | orchestrator | 2026-02-20 03:57:13 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-20 03:57:13.691124 | orchestrator | 2026-02-20 03:57:13 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-20 03:57:13.691144 | orchestrator | 2026-02-20 03:57:13 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-20 03:57:14.116883 | orchestrator | ok: Runtime: 0:03:06.221678 2026-02-20 03:57:14.134640 | 2026-02-20 03:57:14.134781 | TASK [Run checks] 2026-02-20 03:57:14.887303 | orchestrator | + set -e 2026-02-20 03:57:14.887470 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-20 03:57:14.887488 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 03:57:14.887504 | orchestrator | ++ INTERACTIVE=false 2026-02-20 03:57:14.887514 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 03:57:14.887523 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 03:57:14.887535 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-20 03:57:14.888692 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-20 03:57:14.893185 | orchestrator | 2026-02-20 03:57:14.893237 | orchestrator | # CHECK 2026-02-20 03:57:14.893247 | orchestrator | 2026-02-20 03:57:14.893256 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 03:57:14.893269 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 03:57:14.893278 | orchestrator | + echo 2026-02-20 03:57:14.893287 | orchestrator | + echo '# CHECK' 2026-02-20 03:57:14.893295 | orchestrator | + echo 2026-02-20 03:57:14.893307 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-20 03:57:14.894232 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-20 03:57:14.950159 | orchestrator | 2026-02-20 03:57:14.950260 | orchestrator | ## Containers @ testbed-manager 2026-02-20 03:57:14.950274 | orchestrator | 2026-02-20 03:57:14.950286 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-20 03:57:14.950296 | orchestrator | + echo 2026-02-20 03:57:14.950306 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-20 03:57:14.950315 | orchestrator | + echo 2026-02-20 03:57:14.950325 | orchestrator | + osism container testbed-manager ps 2026-02-20 03:57:16.854606 | orchestrator | 2026-02-20 03:57:16 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-20 03:57:17.218459 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-20 03:57:17.218596 | orchestrator | 32700ccc47ed 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-20 03:57:17.218628 | orchestrator | 07f5c7914b1b registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-20 03:57:17.218643 | orchestrator | 63f3d6e6512d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-20 03:57:17.218653 | orchestrator | 66b88a6234b8 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-20 03:57:17.218663 | orchestrator | 7de2dd21a460 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-20 03:57:17.218676 | orchestrator | 5f5feae47b8a registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 56 minutes cephclient 2026-02-20 03:57:17.218686 | orchestrator | 49c57bf7379a registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-20 03:57:17.218695 | orchestrator | 0ca2403915a1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-20 03:57:17.218728 | orchestrator | abaeec4750a6 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-20 03:57:17.218738 | orchestrator | 51346a6acd08 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-20 03:57:17.218747 | orchestrator | 4cb86cd3309f phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-20 03:57:17.218756 | 
orchestrator | 42753d1b6844 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-20 03:57:17.218766 | orchestrator | acee7d9653d2 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-20 03:57:17.218775 | orchestrator | 3ab8f3be2a32 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-20 03:57:17.218803 | orchestrator | f28b5e4d7cf4 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-20 03:57:17.218822 | orchestrator | f79b0e5a7fed registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-20 03:57:17.218831 | orchestrator | dfccf86bf16d registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-20 03:57:17.218840 | orchestrator | b2e6924d206a registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-20 03:57:17.218849 | orchestrator | 45d49cbad0cd registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-20 03:57:17.218859 | orchestrator | b50d2bafe678 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-20 03:57:17.219379 | orchestrator | 98eefe461fc6 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-20 03:57:17.219409 | orchestrator | c7e3850f953c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-20 
03:57:17.219439 | orchestrator | ba4a25ff6d60 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-20 03:57:17.219455 | orchestrator | b5893f824689 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-20 03:57:17.220112 | orchestrator | cb20f7b28ce4 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-20 03:57:17.220658 | orchestrator | 00d7c7ac40af registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-20 03:57:17.221430 | orchestrator | a4ef28775236 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-20 03:57:17.221460 | orchestrator | 0bb115efac5f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-20 03:57:17.221474 | orchestrator | 140fdb42bcde registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-20 03:57:17.221507 | orchestrator | 2a11596312a1 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-20 03:57:17.487496 | orchestrator | 2026-02-20 03:57:17.487616 | orchestrator | ## Images @ testbed-manager 2026-02-20 03:57:17.487640 | orchestrator | 2026-02-20 03:57:17.487665 | orchestrator | + echo 2026-02-20 03:57:17.487693 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-20 03:57:17.487712 | orchestrator | + echo 2026-02-20 03:57:17.487734 | orchestrator | + osism container testbed-manager images 2026-02-20 03:57:19.771637 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-20 03:57:19.771756 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 f0399b330f69 24 hours ago 239MB 2026-02-20 03:57:19.771773 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 3 weeks ago 41.4MB 2026-02-20 03:57:19.771786 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-20 03:57:19.771797 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-20 03:57:19.771808 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-20 03:57:19.771819 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-20 03:57:19.771830 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-20 03:57:19.771844 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-20 03:57:19.771855 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-20 03:57:19.771898 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-20 03:57:19.771910 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-20 03:57:19.771921 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-20 03:57:19.771932 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-20 03:57:19.771943 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-20 03:57:19.771954 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-20 03:57:19.771965 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-20 03:57:19.771976 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-20 03:57:19.772017 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-20 03:57:19.772029 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-20 03:57:19.772040 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-20 03:57:19.772051 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-20 03:57:19.772062 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB 2026-02-20 03:57:19.772073 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-20 03:57:19.772084 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-20 03:57:19.772095 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-20 03:57:20.096346 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-20 03:57:20.096515 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-20 03:57:20.126139 | orchestrator | 2026-02-20 03:57:20.126237 | orchestrator | ## Containers @ testbed-node-0 2026-02-20 03:57:20.126252 | orchestrator | 2026-02-20 03:57:20.126264 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-20 03:57:20.126276 | orchestrator | + echo 2026-02-20 03:57:20.126288 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-20 03:57:20.126300 | orchestrator | + echo 2026-02-20 03:57:20.126311 | orchestrator | + osism container testbed-node-0 ps 2026-02-20 
03:57:22.494913 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-20 03:57:22.495101 | orchestrator | 3b5b68d5963f registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-20 03:57:22.495163 | orchestrator | 6998f86966ed registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-20 03:57:22.495188 | orchestrator | c644f2085d9b registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-02-20 03:57:22.495208 | orchestrator | 0094faafb164 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-20 03:57:22.495257 | orchestrator | 071165e728d1 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-20 03:57:22.495276 | orchestrator | 8bbf532699f8 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-20 03:57:22.495304 | orchestrator | 09e26ac30ab3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-20 03:57:22.495320 | orchestrator | 6bd83905dea2 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-20 03:57:22.495331 | orchestrator | 65ab38a9dacb registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-20 03:57:22.495343 | orchestrator | d96224af8eed 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-20 03:57:22.495354 | orchestrator | 0234575501f1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-20 03:57:22.495365 | orchestrator | c1408b472dd9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-20 03:57:22.495376 | orchestrator | ae0cfe1467ad registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-20 03:57:22.495387 | orchestrator | c85c6bff1042 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-20 03:57:22.495398 | orchestrator | 98664262098c registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-02-20 03:57:22.495416 | orchestrator | bd4f00c2095f registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-20 03:57:22.495441 | orchestrator | 26b058cc21d1 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-20 03:57:22.495463 | orchestrator | 4bf0cc4f336a registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-20 03:57:22.495481 | orchestrator | 22d773db2961 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-20 03:57:22.495534 | orchestrator | 4fc974753041 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-20 03:57:22.495555 | orchestrator | 0051edec5ddd registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-20 03:57:22.495575 | orchestrator | 12769e68ef30 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-20 03:57:22.495605 | orchestrator | 989d9b3b92de registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-20 03:57:22.495617 | orchestrator | 8b5883ee04dc registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-20 03:57:22.495628 | orchestrator | 5d209f92a614 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-20 03:57:22.495645 | orchestrator | 2227e831067e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-20 03:57:22.495656 | orchestrator | d9e852b9306c registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-20 03:57:22.495667 | orchestrator | c5ef3b4d0413 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-20 03:57:22.495679 | orchestrator | 453a24df2a69 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 
2026-02-20 03:57:22.495690 | orchestrator | 5ef0226ad51b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-02-20 03:57:22.495701 | orchestrator | 4b0874956ba8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-02-20 03:57:22.495713 | orchestrator | 5b716554f4c4 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-02-20 03:57:22.495724 | orchestrator | 25605db6116c registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-20 03:57:22.496102 | orchestrator | 52be80805841 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-02-20 03:57:22.496122 | orchestrator | 6aac24740b7d registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-20 03:57:22.496134 | orchestrator | 2a6e27b55f59 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-20 03:57:22.496145 | orchestrator | c44634a65309 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-20 03:57:22.496156 | orchestrator | 2a913670ca3f registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-02-20 03:57:22.496167 | orchestrator | f74eedb5edb0 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-20 03:57:22.496178 | orchestrator | f273231fca5f registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-02-20 03:57:22.496199 | orchestrator | 770625b87ad5 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-02-20 03:57:22.496210 | orchestrator | 3e0f06476064 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-02-20 03:57:22.496229 | orchestrator | 6328d9b64c96 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-02-20 03:57:22.496240 | orchestrator | 17b998ebec87 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-02-20 03:57:22.496251 | orchestrator | 4b3ab2079c58 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-20 03:57:22.496263 | orchestrator | 5eb01acfcaff registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-02-20 03:57:22.496273 | orchestrator | a27c822eb1dc registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-02-20 03:57:22.496284 | orchestrator | c7e2c3abaa52 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-02-20 03:57:22.496296 | orchestrator | 2a43994f2c62 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-02-20 03:57:22.496306 | orchestrator | 30ae3a08aa21 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-0
2026-02-20 03:57:22.496317 | orchestrator | 01404f8a479f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-20 03:57:22.496328 | orchestrator | c9a9a7d69b4c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-20 03:57:22.496339 | orchestrator | c7a20da6facd registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-20 03:57:22.496362 | orchestrator | d9e0d0ad5e48 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-20 03:57:22.496388 | orchestrator | fcf59ab249f9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-20 03:57:22.496409 | orchestrator | b87bed921771 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-20 03:57:22.496433 | orchestrator | c09b591a7a07 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-20 03:57:22.496450 | orchestrator | 586c356d7429 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-20 03:57:22.496479 | orchestrator | aeb8fb27d298 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-20 03:57:22.496497 | orchestrator | 4380f6baaae1 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-20 03:57:22.496514 | orchestrator | 581e8aa51153 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-20 03:57:22.496532 | orchestrator | 2e4c1690e920 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-20 03:57:22.496548 | orchestrator | 5b781f484aee registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-20 03:57:22.496567 | orchestrator | 10629914a572 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-20 03:57:22.496586 | orchestrator | e6302aa98475 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-20 03:57:22.496601 | orchestrator | ac7572448fb5 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-20 03:57:22.496612 | orchestrator | a8a5a01a52d1 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-20 03:57:22.496623 | orchestrator | e5aee6623aae registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-20 03:57:22.496635 | orchestrator | dd40f5acba1c registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-20 03:57:22.496645 | orchestrator | 421b03ae1cdc registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-20 03:57:22.496656 | orchestrator | 72c0f22e1ec8 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-20 03:57:22.758136 | orchestrator |
2026-02-20 03:57:22.758273 | orchestrator | ## Images @ testbed-node-0
2026-02-20 03:57:22.758301 | orchestrator |
2026-02-20 03:57:22.758320 | orchestrator | + echo
2026-02-20 03:57:22.758339 | orchestrator | + echo '## Images @ testbed-node-0'
2026-02-20 03:57:22.758359 | orchestrator | + echo
2026-02-20 03:57:22.758378 | orchestrator | + osism container testbed-node-0 images
2026-02-20 03:57:25.121756 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-20 03:57:25.121888 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-20 03:57:25.121905 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-20 03:57:25.121917 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-20 03:57:25.121928 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-20 03:57:25.121960 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-20 03:57:25.122071 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-20 03:57:25.122087 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-20 03:57:25.122098 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-20 03:57:25.122109 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-20 03:57:25.122119 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-20 03:57:25.122130 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-20 03:57:25.122141 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-20 03:57:25.122152 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-20 03:57:25.122163 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-20 03:57:25.122174 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-20 03:57:25.122185 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-20 03:57:25.122196 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-20 03:57:25.122207 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-20 03:57:25.122217 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-20 03:57:25.122228 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-20 03:57:25.122239 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-20 03:57:25.122250 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-20 03:57:25.122261 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-20 03:57:25.122272 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-20 03:57:25.122283 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-20 03:57:25.122293 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-20 03:57:25.122304 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-20 03:57:25.122323 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-20 03:57:25.122334 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-20 03:57:25.122345 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-20 03:57:25.122365 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-20 03:57:25.122396 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-20 03:57:25.122408 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-20 03:57:25.122419 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-20 03:57:25.122429 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-20 03:57:25.122440 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-20 03:57:25.122451 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-20 03:57:25.122462 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-20 03:57:25.122472 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-20 03:57:25.122483 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-20 03:57:25.122494 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-20 03:57:25.122505 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-20 03:57:25.122516 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-20 03:57:25.122527 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-20 03:57:25.122538 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-20 03:57:25.122549 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-20 03:57:25.122560 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-20 03:57:25.122571 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-20 03:57:25.122582 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-20 03:57:25.122593 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-20 03:57:25.122604 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-20 03:57:25.122614 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-20 03:57:25.122625 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-20 03:57:25.122636 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-20 03:57:25.122647 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-20 03:57:25.122658 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-20 03:57:25.122676 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-20 03:57:25.122687 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-20 03:57:25.122703 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-20 03:57:25.122714 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-20 03:57:25.122725 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-20 03:57:25.122736 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-20 03:57:25.122747 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-20 03:57:25.122764 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-20 03:57:25.122776 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-20 03:57:25.122787 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-20 03:57:25.122797 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-20 03:57:25.122808 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-20 03:57:25.122819 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-20 03:57:25.384577 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-20 03:57:25.384731 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-20 03:57:25.417511 | orchestrator |
2026-02-20 03:57:25.417601 | orchestrator | ## Containers @ testbed-node-1
2026-02-20 03:57:25.417621 | orchestrator |
2026-02-20 03:57:25.417633 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-20 03:57:25.417644 | orchestrator | + echo
2026-02-20 03:57:25.417656 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-02-20 03:57:25.417667 | orchestrator | + echo
2026-02-20 03:57:25.417679 | orchestrator | + osism container testbed-node-1 ps
2026-02-20 03:57:27.741607 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-20 03:57:27.741713 | orchestrator | 019ec3795263 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-20 03:57:27.741729 | orchestrator | 88c96abf4048 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-02-20 03:57:27.741741 | orchestrator | e99908fb5917 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-02-20 03:57:27.741752 | orchestrator | c68a534f4d75 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-20 03:57:27.741765 | orchestrator | 84fa607d2b1d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-20 03:57:27.741776 | orchestrator | 7886dfba868b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-20 03:57:27.741812 | orchestrator | 96e63a67470b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-20 03:57:27.741823 | orchestrator | 2461c7e07ed5 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-20 03:57:27.741835 | orchestrator | 46e4211b55c5 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-20 03:57:27.741846 | orchestrator | 983e70bed036 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-02-20 03:57:27.741857 | orchestrator | 6b0d0abb3dbf registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-02-20 03:57:27.741868 | orchestrator | 45f1e57d875c registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-02-20 03:57:27.741896 | orchestrator | a7c9072e1644 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-02-20 03:57:27.741908 | orchestrator | fc111d30063d registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-02-20 03:57:27.741919 | orchestrator | 39f51450ede9 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-02-20 03:57:27.741938 | orchestrator | 12d498295755 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-20 03:57:27.741956 | orchestrator | 2d9f855912c1 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-02-20 03:57:27.742135 | orchestrator | 458c7d10869d registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-02-20 03:57:27.742158 | orchestrator | d62041206a11 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-20 03:57:27.742204 | orchestrator | 0549d7dd795a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-20 03:57:27.742224 | orchestrator | f02f6179b2e3 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-02-20 03:57:27.742240 | orchestrator | acf5bc40f312 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-02-20 03:57:27.742251 | orchestrator | c3461f1d17f0 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-02-20 03:57:27.742618 | orchestrator | 9e68fd94bd54 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-20 03:57:27.742651 | orchestrator | e0b165dfb93c registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-20 03:57:27.742662 | orchestrator | 5cecd239f1ba registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-20 03:57:27.742673 | orchestrator | 82764d74b0fb registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-02-20 03:57:27.742684 | orchestrator | 0b4a5aa91e46 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-02-20 03:57:27.742695 | orchestrator | c08262502c03 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-02-20 03:57:27.742706 | orchestrator | 5e2f04a018d2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-02-20 03:57:27.742716 | orchestrator | 7ccebab11fb3 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-02-20 03:57:27.742728 | orchestrator | 8c96f5a86bbb registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-02-20 03:57:27.742738 | orchestrator | ca4f43326f84 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-20 03:57:27.742749 | orchestrator | b9f4f852228d registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-02-20 03:57:27.742760 | orchestrator | f981ce49c524 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-20 03:57:27.742771 | orchestrator | cf45d47421fd registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-20 03:57:27.742792 | orchestrator | a31812ac2783 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-20 03:57:27.742803 | orchestrator | 45fd5624fd07 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-02-20 03:57:27.742814 | orchestrator | 5fa08013cc0e registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-20 03:57:27.742825 | orchestrator | 4dcbb61172f3 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-02-20 03:57:27.742836 | orchestrator | 3dea172eca34 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-02-20 03:57:27.742854 | orchestrator | 15a3536c32e7 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-02-20 03:57:27.742865 | orchestrator | a344483389a3 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-02-20 03:57:27.742883 | orchestrator | d811e6ed062d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler
2026-02-20 03:57:27.742895 | orchestrator | 88bd075c8e66 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server
2026-02-20 03:57:27.742906 | orchestrator | 8cf151a0d206 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-02-20 03:57:27.742916 | orchestrator | d1e71aba3eca registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-02-20 03:57:27.742927 | orchestrator | 2da1492af515 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-02-20 03:57:27.742938 | orchestrator | 173bf3aa1413 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh
2026-02-20 03:57:27.742949 | orchestrator | ccb5d50bbee3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-1
2026-02-20 03:57:27.742960 | orchestrator | 019c891f913a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-02-20 03:57:27.743012 | orchestrator | b179183cbe33 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-02-20 03:57:27.743024 | orchestrator | 89b507105423 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-20 03:57:27.743036 | orchestrator | 0d5f05e6d996 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-20 03:57:27.743046 | orchestrator | 3e1eb52d9cda registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-20 03:57:27.743057 | orchestrator | e8abc4c036af registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-20 03:57:27.743068 | orchestrator | 3028f92cb117 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-20 03:57:27.743162 | orchestrator | f30b77b0022b registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-20 03:57:27.743175 | orchestrator | 5213da47e0f5 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-20 03:57:27.743194 | orchestrator | 0bad7fbce47f registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-20 03:57:27.743206 | orchestrator | 37f7ae699a20 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-20 03:57:27.743217 | orchestrator | 74efaa387e3e registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-20 03:57:27.743228 | orchestrator | 1661049cc3c9 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-20 03:57:27.743239 | orchestrator | 38880ccd2fff registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-20 03:57:27.743272 | orchestrator | 30e4dfc626c7 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-20 03:57:27.743285 | orchestrator | 2c2c42cb01cd registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-20 03:57:27.743295 | orchestrator | b775fa33557d registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-20 03:57:27.743306 | orchestrator | 3eb6ac2241e4 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-20 03:57:27.743317 | orchestrator | 131098ab03e7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-20 03:57:27.743333 | orchestrator | b550cf769fda registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-20 03:57:27.743344 | orchestrator | 11a8d2b2320f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-20 03:57:28.010164 | orchestrator |
2026-02-20 03:57:28.010285 | orchestrator | ## Images @ testbed-node-1
2026-02-20 03:57:28.010305 | orchestrator |
2026-02-20 03:57:28.010321 | orchestrator | + echo
2026-02-20 03:57:28.010337 | orchestrator | + echo '## Images @ testbed-node-1'
2026-02-20 03:57:28.010355 | orchestrator | + echo
2026-02-20 03:57:28.010371 | orchestrator | + osism container testbed-node-1 images
2026-02-20 03:57:30.276126 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-20 03:57:30.276927 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-20 03:57:30.276962 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-20 03:57:30.277006 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-20 03:57:30.277021 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-20 03:57:30.277035 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-20 03:57:30.277048 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-20 03:57:30.277086 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-20 03:57:30.277099 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-20 03:57:30.277111 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-20 03:57:30.277124 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-20 03:57:30.277136 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-20 03:57:30.277151 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-20 03:57:30.277169 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-20 03:57:30.277188 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-20 03:57:30.277203 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-20 03:57:30.277224 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-20 03:57:30.277251 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-20 03:57:30.277268 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-20 03:57:30.277285 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-20 03:57:30.277302 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-20 03:57:30.277319 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-20 03:57:30.277337 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-20 03:57:30.277353 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-20 03:57:30.277370 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-20 03:57:30.277388 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-20 03:57:30.277406 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-20 03:57:30.277425 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-20 03:57:30.277443 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-20 03:57:30.277463 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-20 03:57:30.277482 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-20 03:57:30.277499 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-20 03:57:30.277535 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-20 03:57:30.277558 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-20 03:57:30.277569 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-20 03:57:30.277580 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-20 03:57:30.277591 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-20 03:57:30.277602 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-20 03:57:30.277631 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-20 03:57:30.277643 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-20 03:57:30.277654 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-20 03:57:30.277665 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-20 03:57:30.277675 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-20 03:57:30.277686 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-20 03:57:30.277697 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-20 03:57:30.277708 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-20 03:57:30.277719 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-20 03:57:30.277730 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-20 03:57:30.277741 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-20 03:57:30.277752 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-20 03:57:30.277763 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-20 03:57:30.277774 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-20 03:57:30.277784 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-20 03:57:30.277795 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-20 03:57:30.277806 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-20 03:57:30.277817 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-20 03:57:30.277828 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-20 03:57:30.277839 | orchestrator |
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-20 03:57:30.277849 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-20 03:57:30.277860 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-20 03:57:30.277878 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-20 03:57:30.277889 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-20 03:57:30.277900 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-20 03:57:30.277911 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-20 03:57:30.277929 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-20 03:57:30.277941 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-20 03:57:30.277951 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-20 03:57:30.277962 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-20 03:57:30.278112 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-20 03:57:30.278125 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-20 03:57:30.563962 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-20 03:57:30.564154 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-20 03:57:30.623407 | 
orchestrator | 2026-02-20 03:57:30.623515 | orchestrator | ## Containers @ testbed-node-2 2026-02-20 03:57:30.623531 | orchestrator | 2026-02-20 03:57:30.623543 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-20 03:57:30.623554 | orchestrator | + echo 2026-02-20 03:57:30.623566 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-20 03:57:30.623578 | orchestrator | + echo 2026-02-20 03:57:30.623590 | orchestrator | + osism container testbed-node-2 ps 2026-02-20 03:57:32.978574 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-20 03:57:32.978705 | orchestrator | 2b0928ecb981 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-20 03:57:32.978739 | orchestrator | 3953a9b47e4f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-20 03:57:32.978763 | orchestrator | 7f647a334c90 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-20 03:57:32.978782 | orchestrator | b0d464fdbf0e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-20 03:57:32.978803 | orchestrator | 71c4b5e4c099 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-20 03:57:32.978820 | orchestrator | 98cbefb83c0f registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-20 03:57:32.978838 | orchestrator | 7b1ae785ff47 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-20 
03:57:32.978856 | orchestrator | cb135e1198b1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-20 03:57:32.978909 | orchestrator | 59b2910c22e5 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-20 03:57:32.979043 | orchestrator | 21f1cc735996 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-20 03:57:32.979063 | orchestrator | 671ed9f91012 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-20 03:57:32.979083 | orchestrator | 9599dcbe04de registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-20 03:57:32.979127 | orchestrator | b14a88b660dd registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-20 03:57:32.979151 | orchestrator | e192e0e55e48 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-20 03:57:32.979170 | orchestrator | c7af8c172586 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-02-20 03:57:32.979183 | orchestrator | 9e71209a5884 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-20 03:57:32.979196 | orchestrator | f5007dc2f57b registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-20 03:57:32.979209 | orchestrator | 
f59e990a6ff4 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-20 03:57:32.979222 | orchestrator | 5e9d0fbd352e registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-20 03:57:32.979258 | orchestrator | b2018fbb59ab registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-20 03:57:32.979272 | orchestrator | 22b45df8783d registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-20 03:57:32.979284 | orchestrator | ec51fb324480 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-20 03:57:32.979297 | orchestrator | 0bd601c78b9a registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-20 03:57:32.979310 | orchestrator | fd70d1aa1420 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-20 03:57:32.979323 | orchestrator | 97126669a156 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-20 03:57:32.979348 | orchestrator | 85a93ffccd54 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-20 03:57:32.979361 | orchestrator | 0333a5018fe4 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) 
designate_central 2026-02-20 03:57:32.979374 | orchestrator | 9079c35ad66a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-20 03:57:32.979387 | orchestrator | 6324c09602ad registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-20 03:57:32.979400 | orchestrator | 22e76dc682ff registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-20 03:57:32.979413 | orchestrator | ac18d23cb8ec registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-20 03:57:32.979426 | orchestrator | 10a16f409fd3 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-20 03:57:32.979437 | orchestrator | 613fa9b48b6c registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-20 03:57:32.979448 | orchestrator | bb3f10b8ada6 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-20 03:57:32.979459 | orchestrator | 8d3f9f026704 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-20 03:57:32.979470 | orchestrator | 8f7b22bb6a5c registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-20 03:57:32.979480 | orchestrator | fb492c66ebc4 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 
33 minutes (healthy) glance_api 2026-02-20 03:57:32.979491 | orchestrator | 797fea1b1b5a registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-02-20 03:57:32.979502 | orchestrator | d64470a0910d registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-20 03:57:32.979521 | orchestrator | 2ad5504d01b5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-20 03:57:32.979536 | orchestrator | c81caaf867e6 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-20 03:57:32.979561 | orchestrator | a763b697e859 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-20 03:57:32.979585 | orchestrator | 14c0030d2185 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api 2026-02-20 03:57:32.979650 | orchestrator | b399d470db82 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler 2026-02-20 03:57:32.979667 | orchestrator | 66f811b5920f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server 2026-02-20 03:57:32.979685 | orchestrator | b50b93c36687 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-20 03:57:32.979703 | orchestrator | 0eaf14b7fea0 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 
2026-02-20 03:57:32.979719 | orchestrator | 578f237b2976 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-20 03:57:32.979736 | orchestrator | 105a8fc478b0 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh 2026-02-20 03:57:32.979755 | orchestrator | 1f753c94ca37 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-2 2026-02-20 03:57:32.979773 | orchestrator | 0986ad7be83e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-20 03:57:32.979803 | orchestrator | 28a82f95a8fd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-20 03:57:32.979823 | orchestrator | 0130a132edcc registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-20 03:57:32.979843 | orchestrator | 014d1533ec33 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-20 03:57:32.979855 | orchestrator | 5c5b79af5956 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-20 03:57:32.979866 | orchestrator | 1df6be7f2c5e registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-20 03:57:32.979877 | orchestrator | deaf92eea100 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-20 03:57:32.979888 | orchestrator | 977952f29a7a 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-20 03:57:32.979899 | orchestrator | 20e128f90d59 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-20 03:57:32.979921 | orchestrator | 9d2325dd51e3 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-20 03:57:32.979943 | orchestrator | 2d8650a638c1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-20 03:57:32.980161 | orchestrator | dafef211ae75 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-20 03:57:32.980184 | orchestrator | d063ca4e3694 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-20 03:57:32.980196 | orchestrator | b9bc412b3194 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-20 03:57:32.980207 | orchestrator | 86dc45909122 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-20 03:57:32.980218 | orchestrator | 3a2d2e151b17 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-20 03:57:32.980229 | orchestrator | 0b8efd43c993 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-20 03:57:32.980239 | orchestrator | ac4bcfee3847 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-20 03:57:32.980250 | orchestrator | 5f1c091bc1c6 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-20 03:57:32.980261 | orchestrator | 4d4ba46129cd registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-20 03:57:32.980272 | orchestrator | d6e1fee6693a registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-20 03:57:33.255805 | orchestrator | 2026-02-20 03:57:33.255922 | orchestrator | ## Images @ testbed-node-2 2026-02-20 03:57:33.255948 | orchestrator | 2026-02-20 03:57:33.255996 | orchestrator | + echo 2026-02-20 03:57:33.256011 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-20 03:57:33.256023 | orchestrator | + echo 2026-02-20 03:57:33.256035 | orchestrator | + osism container testbed-node-2 images 2026-02-20 03:57:35.603518 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-20 03:57:35.603624 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-20 03:57:35.603639 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-20 03:57:35.603651 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-20 03:57:35.603679 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-20 03:57:35.603740 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-20 03:57:35.603754 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-20 
03:57:35.603765 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-20 03:57:35.603777 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-20 03:57:35.603809 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-20 03:57:35.603821 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-20 03:57:35.603837 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-20 03:57:35.603848 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-20 03:57:35.603859 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-20 03:57:35.603871 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-20 03:57:35.603882 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-20 03:57:35.603893 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-20 03:57:35.603904 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-20 03:57:35.603914 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-20 03:57:35.603926 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-20 03:57:35.603937 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-20 03:57:35.603948 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-20 03:57:35.603983 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-20 03:57:35.604004 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-20 03:57:35.604023 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-20 03:57:35.604042 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-20 03:57:35.604059 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-20 03:57:35.604077 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-20 03:57:35.604088 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-20 03:57:35.604100 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-20 03:57:35.604118 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-20 03:57:35.604136 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-20 03:57:35.604178 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-20 03:57:35.604198 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-20 03:57:35.604214 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-20 03:57:35.604231 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-20 03:57:35.604262 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-20 03:57:35.604278 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-20 03:57:35.604294 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-20 03:57:35.604322 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-20 03:57:35.604341 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-20 03:57:35.604360 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-20 03:57:35.604380 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-20 03:57:35.604398 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-20 03:57:35.604415 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-20 03:57:35.604427 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-20 03:57:35.604438 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-20 03:57:35.604449 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-20 03:57:35.604460 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-20 03:57:35.604579 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-20 03:57:35.604594 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-20 03:57:35.604608 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-20 03:57:35.604628 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-20 03:57:35.604647 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-20 03:57:35.604663 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-20 03:57:35.604682 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-20 03:57:35.604701 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-20 03:57:35.604719 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-20 03:57:35.604736 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-20 03:57:35.604755 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-20 03:57:35.604773 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-20 03:57:35.604792 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-20 03:57:35.604830 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-20 03:57:35.604851 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-20 03:57:35.604870 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-20 03:57:35.604889 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-20 03:57:35.604901 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-20 03:57:35.604912 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-20 03:57:35.604938 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-20 03:57:35.604950 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-20 03:57:35.884304 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-20 03:57:35.892073 | orchestrator | + set -e 2026-02-20 03:57:35.892151 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 03:57:35.892166 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 03:57:35.893059 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 03:57:35.893157 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 03:57:35.893171 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 03:57:35.893183 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 03:57:35.893195 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 03:57:35.893207 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 03:57:35.893218 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 03:57:35.893229 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 03:57:35.893241 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 03:57:35.893251 | orchestrator | ++ export ARA=false 2026-02-20 03:57:35.893263 | orchestrator | ++ 
ARA=false 2026-02-20 03:57:35.893274 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 03:57:35.893285 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 03:57:35.893296 | orchestrator | ++ export TEMPEST=false 2026-02-20 03:57:35.893308 | orchestrator | ++ TEMPEST=false 2026-02-20 03:57:35.893319 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 03:57:35.893329 | orchestrator | ++ IS_ZUUL=true 2026-02-20 03:57:35.893341 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:57:35.893352 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:57:35.893363 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 03:57:35.893374 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 03:57:35.893385 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 03:57:35.893395 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 03:57:35.893468 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 03:57:35.893486 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 03:57:35.893505 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 03:57:35.893524 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 03:57:35.893543 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-20 03:57:35.893563 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-20 03:57:35.902067 | orchestrator | + set -e 2026-02-20 03:57:35.902166 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 03:57:35.902187 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 03:57:35.902207 | orchestrator | ++ INTERACTIVE=false 2026-02-20 03:57:35.902226 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 03:57:35.902245 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 03:57:35.902265 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-20 03:57:35.903034 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-20 03:57:35.909093 | orchestrator | 2026-02-20 03:57:35.909180 | orchestrator | # Ceph status 2026-02-20 03:57:35.909202 | orchestrator | 2026-02-20 03:57:35.909220 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 03:57:35.909239 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 03:57:35.909259 | orchestrator | + echo 2026-02-20 03:57:35.909278 | orchestrator | + echo '# Ceph status' 2026-02-20 03:57:35.909332 | orchestrator | + echo 2026-02-20 03:57:35.909351 | orchestrator | + ceph -s 2026-02-20 03:57:36.466449 | orchestrator | cluster: 2026-02-20 03:57:36.466573 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-20 03:57:36.466599 | orchestrator | health: HEALTH_OK 2026-02-20 03:57:36.466619 | orchestrator | 2026-02-20 03:57:36.466639 | orchestrator | services: 2026-02-20 03:57:36.466659 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 67m) 2026-02-20 03:57:36.466681 | orchestrator | mgr: testbed-node-1(active, since 55m), standbys: testbed-node-2, testbed-node-0 2026-02-20 03:57:36.466698 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-20 03:57:36.466710 | orchestrator | osd: 6 osds: 6 up (since 63m), 6 in (since 64m) 2026-02-20 03:57:36.466721 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-20 03:57:36.466733 | orchestrator | 2026-02-20 03:57:36.466744 | orchestrator | data: 2026-02-20 03:57:36.466755 | orchestrator | volumes: 1/1 healthy 2026-02-20 03:57:36.466766 | orchestrator | pools: 14 pools, 401 pgs 2026-02-20 03:57:36.466778 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-20 03:57:36.466789 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-02-20 03:57:36.466800 | orchestrator | pgs: 401 active+clean 2026-02-20 03:57:36.466811 | orchestrator | 2026-02-20 03:57:36.507829 | orchestrator | 2026-02-20 03:57:36.507915 | orchestrator | # Ceph versions 2026-02-20 
03:57:36.507926 | orchestrator | 2026-02-20 03:57:36.507936 | orchestrator | + echo 2026-02-20 03:57:36.507946 | orchestrator | + echo '# Ceph versions' 2026-02-20 03:57:36.507956 | orchestrator | + echo 2026-02-20 03:57:36.507984 | orchestrator | + ceph versions 2026-02-20 03:57:37.073250 | orchestrator | { 2026-02-20 03:57:37.073360 | orchestrator | "mon": { 2026-02-20 03:57:37.073373 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-20 03:57:37.073383 | orchestrator | }, 2026-02-20 03:57:37.073392 | orchestrator | "mgr": { 2026-02-20 03:57:37.073401 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-20 03:57:37.073409 | orchestrator | }, 2026-02-20 03:57:37.073418 | orchestrator | "osd": { 2026-02-20 03:57:37.073426 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-02-20 03:57:37.073434 | orchestrator | }, 2026-02-20 03:57:37.073444 | orchestrator | "mds": { 2026-02-20 03:57:37.073458 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-20 03:57:37.073471 | orchestrator | }, 2026-02-20 03:57:37.073484 | orchestrator | "rgw": { 2026-02-20 03:57:37.073497 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-20 03:57:37.073510 | orchestrator | }, 2026-02-20 03:57:37.073524 | orchestrator | "overall": { 2026-02-20 03:57:37.073537 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-02-20 03:57:37.073551 | orchestrator | } 2026-02-20 03:57:37.073565 | orchestrator | } 2026-02-20 03:57:37.114492 | orchestrator | 2026-02-20 03:57:37.114571 | orchestrator | # Ceph OSD tree 2026-02-20 03:57:37.114581 | orchestrator | 2026-02-20 03:57:37.114589 | orchestrator | + echo 2026-02-20 03:57:37.114598 | orchestrator | + echo '# Ceph OSD tree' 2026-02-20 
03:57:37.114606 | orchestrator | + echo 2026-02-20 03:57:37.114614 | orchestrator | + ceph osd df tree 2026-02-20 03:57:37.592692 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-20 03:57:37.592804 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 402 MiB 113 GiB 5.89 1.00 - root default 2026-02-20 03:57:37.592824 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-02-20 03:57:37.592837 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.77 1.15 201 up osd.0 2026-02-20 03:57:37.592849 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1016 MiB 955 MiB 1 KiB 62 MiB 19 GiB 4.97 0.84 189 up osd.5 2026-02-20 03:57:37.592861 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-02-20 03:57:37.592873 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.63 1.13 190 up osd.1 2026-02-20 03:57:37.592916 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 74 MiB 19 GiB 5.18 0.88 202 up osd.4 2026-02-20 03:57:37.592930 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-02-20 03:57:37.592943 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.73 1.31 188 up osd.2 2026-02-20 03:57:37.592955 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 836 MiB 771 MiB 1 KiB 66 MiB 19 GiB 4.09 0.69 200 up osd.3 2026-02-20 03:57:37.593058 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 402 MiB 113 GiB 5.89 2026-02-20 03:57:37.593071 | orchestrator | MIN/MAX VAR: 0.69/1.31 STDDEV: 1.24 2026-02-20 03:57:37.633578 | orchestrator | 2026-02-20 03:57:37.633669 | orchestrator | # Ceph monitor status 2026-02-20 03:57:37.633684 | orchestrator | 2026-02-20 03:57:37.633696 | orchestrator | + echo 2026-02-20 03:57:37.633708 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-20 03:57:37.633720 | orchestrator | + echo 2026-02-20 03:57:37.633731 | orchestrator | + ceph mon stat 2026-02-20 03:57:38.220231 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-20 03:57:38.270681 | orchestrator | 2026-02-20 03:57:38.270796 | orchestrator | # Ceph quorum status 2026-02-20 03:57:38.270820 | orchestrator | 2026-02-20 03:57:38.270840 | orchestrator | + echo 2026-02-20 03:57:38.270858 | orchestrator | + echo '# Ceph quorum status' 2026-02-20 03:57:38.270877 | orchestrator | + echo 2026-02-20 03:57:38.271992 | orchestrator | + ceph quorum_status 2026-02-20 03:57:38.272082 | orchestrator | + jq 2026-02-20 03:57:38.874391 | orchestrator | { 2026-02-20 03:57:38.874493 | orchestrator | "election_epoch": 8, 2026-02-20 03:57:38.874509 | orchestrator | "quorum": [ 2026-02-20 03:57:38.874521 | orchestrator | 0, 2026-02-20 03:57:38.874531 | orchestrator | 1, 2026-02-20 03:57:38.874548 | orchestrator | 2 2026-02-20 03:57:38.874564 | orchestrator | ], 2026-02-20 03:57:38.874580 | orchestrator | "quorum_names": [ 2026-02-20 03:57:38.874596 | orchestrator | "testbed-node-0", 2026-02-20 03:57:38.874795 | orchestrator | "testbed-node-1", 2026-02-20 03:57:38.874833 | orchestrator | "testbed-node-2" 2026-02-20 03:57:38.874849 | orchestrator | ], 2026-02-20 03:57:38.874865 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-20 03:57:38.874882 | orchestrator | "quorum_age": 4059, 2026-02-20 03:57:38.874898 | orchestrator | "features": { 2026-02-20 03:57:38.874915 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-20 03:57:38.874932 | orchestrator | "quorum_mon": [ 2026-02-20 03:57:38.874948 | 
orchestrator | "kraken", 2026-02-20 03:57:38.875014 | orchestrator | "luminous", 2026-02-20 03:57:38.875030 | orchestrator | "mimic", 2026-02-20 03:57:38.875046 | orchestrator | "osdmap-prune", 2026-02-20 03:57:38.875063 | orchestrator | "nautilus", 2026-02-20 03:57:38.875078 | orchestrator | "octopus", 2026-02-20 03:57:38.875094 | orchestrator | "pacific", 2026-02-20 03:57:38.875112 | orchestrator | "elector-pinging", 2026-02-20 03:57:38.875128 | orchestrator | "quincy", 2026-02-20 03:57:38.875144 | orchestrator | "reef" 2026-02-20 03:57:38.875157 | orchestrator | ] 2026-02-20 03:57:38.875167 | orchestrator | }, 2026-02-20 03:57:38.875177 | orchestrator | "monmap": { 2026-02-20 03:57:38.875186 | orchestrator | "epoch": 1, 2026-02-20 03:57:38.875198 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-20 03:57:38.875217 | orchestrator | "modified": "2026-02-20T02:49:42.920134Z", 2026-02-20 03:57:38.875233 | orchestrator | "created": "2026-02-20T02:49:42.920134Z", 2026-02-20 03:57:38.875249 | orchestrator | "min_mon_release": 18, 2026-02-20 03:57:38.875266 | orchestrator | "min_mon_release_name": "reef", 2026-02-20 03:57:38.875281 | orchestrator | "election_strategy": 1, 2026-02-20 03:57:38.875296 | orchestrator | "disallowed_leaders: ": "", 2026-02-20 03:57:38.875312 | orchestrator | "stretch_mode": false, 2026-02-20 03:57:38.875327 | orchestrator | "tiebreaker_mon": "", 2026-02-20 03:57:38.875344 | orchestrator | "removed_ranks: ": "", 2026-02-20 03:57:38.875360 | orchestrator | "features": { 2026-02-20 03:57:38.875377 | orchestrator | "persistent": [ 2026-02-20 03:57:38.875394 | orchestrator | "kraken", 2026-02-20 03:57:38.875442 | orchestrator | "luminous", 2026-02-20 03:57:38.875454 | orchestrator | "mimic", 2026-02-20 03:57:38.875465 | orchestrator | "osdmap-prune", 2026-02-20 03:57:38.875477 | orchestrator | "nautilus", 2026-02-20 03:57:38.875488 | orchestrator | "octopus", 2026-02-20 03:57:38.875499 | orchestrator | "pacific", 2026-02-20 
03:57:38.875511 | orchestrator | "elector-pinging", 2026-02-20 03:57:38.875528 | orchestrator | "quincy", 2026-02-20 03:57:38.875545 | orchestrator | "reef" 2026-02-20 03:57:38.875562 | orchestrator | ], 2026-02-20 03:57:38.875579 | orchestrator | "optional": [] 2026-02-20 03:57:38.875596 | orchestrator | }, 2026-02-20 03:57:38.875612 | orchestrator | "mons": [ 2026-02-20 03:57:38.875649 | orchestrator | { 2026-02-20 03:57:38.875667 | orchestrator | "rank": 0, 2026-02-20 03:57:38.875683 | orchestrator | "name": "testbed-node-0", 2026-02-20 03:57:38.875700 | orchestrator | "public_addrs": { 2026-02-20 03:57:38.875712 | orchestrator | "addrvec": [ 2026-02-20 03:57:38.875722 | orchestrator | { 2026-02-20 03:57:38.875732 | orchestrator | "type": "v2", 2026-02-20 03:57:38.875743 | orchestrator | "addr": "192.168.16.8:3300", 2026-02-20 03:57:38.875753 | orchestrator | "nonce": 0 2026-02-20 03:57:38.875762 | orchestrator | }, 2026-02-20 03:57:38.875772 | orchestrator | { 2026-02-20 03:57:38.875782 | orchestrator | "type": "v1", 2026-02-20 03:57:38.875792 | orchestrator | "addr": "192.168.16.8:6789", 2026-02-20 03:57:38.875802 | orchestrator | "nonce": 0 2026-02-20 03:57:38.875811 | orchestrator | } 2026-02-20 03:57:38.875821 | orchestrator | ] 2026-02-20 03:57:38.875831 | orchestrator | }, 2026-02-20 03:57:38.875840 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-02-20 03:57:38.875850 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-02-20 03:57:38.875860 | orchestrator | "priority": 0, 2026-02-20 03:57:38.875869 | orchestrator | "weight": 0, 2026-02-20 03:57:38.875879 | orchestrator | "crush_location": "{}" 2026-02-20 03:57:38.875889 | orchestrator | }, 2026-02-20 03:57:38.875898 | orchestrator | { 2026-02-20 03:57:38.875908 | orchestrator | "rank": 1, 2026-02-20 03:57:38.875917 | orchestrator | "name": "testbed-node-1", 2026-02-20 03:57:38.875927 | orchestrator | "public_addrs": { 2026-02-20 03:57:38.875937 | orchestrator | "addrvec": [ 2026-02-20 
03:57:38.875946 | orchestrator | { 2026-02-20 03:57:38.875987 | orchestrator | "type": "v2", 2026-02-20 03:57:38.875997 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-20 03:57:38.876007 | orchestrator | "nonce": 0 2026-02-20 03:57:38.876016 | orchestrator | }, 2026-02-20 03:57:38.876026 | orchestrator | { 2026-02-20 03:57:38.876036 | orchestrator | "type": "v1", 2026-02-20 03:57:38.876045 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-20 03:57:38.876055 | orchestrator | "nonce": 0 2026-02-20 03:57:38.876065 | orchestrator | } 2026-02-20 03:57:38.876074 | orchestrator | ] 2026-02-20 03:57:38.876084 | orchestrator | }, 2026-02-20 03:57:38.876094 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-20 03:57:38.876103 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-20 03:57:38.876113 | orchestrator | "priority": 0, 2026-02-20 03:57:38.876122 | orchestrator | "weight": 0, 2026-02-20 03:57:38.876132 | orchestrator | "crush_location": "{}" 2026-02-20 03:57:38.876142 | orchestrator | }, 2026-02-20 03:57:38.876151 | orchestrator | { 2026-02-20 03:57:38.876161 | orchestrator | "rank": 2, 2026-02-20 03:57:38.876170 | orchestrator | "name": "testbed-node-2", 2026-02-20 03:57:38.876180 | orchestrator | "public_addrs": { 2026-02-20 03:57:38.876189 | orchestrator | "addrvec": [ 2026-02-20 03:57:38.876199 | orchestrator | { 2026-02-20 03:57:38.876209 | orchestrator | "type": "v2", 2026-02-20 03:57:38.876218 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-20 03:57:38.876228 | orchestrator | "nonce": 0 2026-02-20 03:57:38.876237 | orchestrator | }, 2026-02-20 03:57:38.876247 | orchestrator | { 2026-02-20 03:57:38.876257 | orchestrator | "type": "v1", 2026-02-20 03:57:38.876266 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-20 03:57:38.876276 | orchestrator | "nonce": 0 2026-02-20 03:57:38.876286 | orchestrator | } 2026-02-20 03:57:38.876295 | orchestrator | ] 2026-02-20 03:57:38.876305 | orchestrator | }, 2026-02-20 03:57:38.876315 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-20 03:57:38.876325 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-20 03:57:38.876334 | orchestrator | "priority": 0, 2026-02-20 03:57:38.876361 | orchestrator | "weight": 0, 2026-02-20 03:57:38.876380 | orchestrator | "crush_location": "{}" 2026-02-20 03:57:38.876394 | orchestrator | } 2026-02-20 03:57:38.876404 | orchestrator | ] 2026-02-20 03:57:38.876414 | orchestrator | } 2026-02-20 03:57:38.876423 | orchestrator | } 2026-02-20 03:57:38.876450 | orchestrator | 2026-02-20 03:57:38.876461 | orchestrator | # Ceph free space status 2026-02-20 03:57:38.876471 | orchestrator | 2026-02-20 03:57:38.876481 | orchestrator | + echo 2026-02-20 03:57:38.876490 | orchestrator | + echo '# Ceph free space status' 2026-02-20 03:57:38.876500 | orchestrator | + echo 2026-02-20 03:57:38.876510 | orchestrator | + ceph df 2026-02-20 03:57:39.446985 | orchestrator | --- RAW STORAGE --- 2026-02-20 03:57:39.447073 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-20 03:57:39.447090 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-20 03:57:39.447105 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-20 03:57:39.447110 | orchestrator | 2026-02-20 03:57:39.447116 | orchestrator | --- POOLS --- 2026-02-20 03:57:39.447121 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-20 03:57:39.447127 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-20 03:57:39.447132 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-20 03:57:39.447137 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-20 03:57:39.447142 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-20 03:57:39.447146 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-20 03:57:39.447151 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-20 03:57:39.447156 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-20 03:57:39.447161 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-20 03:57:39.447165 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-20 03:57:39.447170 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-20 03:57:39.447174 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-20 03:57:39.447179 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-02-20 03:57:39.447184 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-20 03:57:39.447188 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-20 03:57:39.489696 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-20 03:57:39.544495 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-20 03:57:39.544586 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-20 03:57:39.544601 | orchestrator | + osism apply facts 2026-02-20 03:57:51.605596 | orchestrator | 2026-02-20 03:57:51 | INFO  | Task 00695791-68e5-47cb-9a4a-b9c96beca88d (facts) was prepared for execution. 2026-02-20 03:57:51.605707 | orchestrator | 2026-02-20 03:57:51 | INFO  | It takes a moment until task 00695791-68e5-47cb-9a4a-b9c96beca88d (facts) has been started and output is visible here. 
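The trace above ends with `++ semver 9.5.0 5.0.0` followed by `+ [[ 1 -eq -1 ]]`: the check script compares the installed manager version against a 5.0.0 threshold and only takes a legacy branch when the comparison returns -1 (version below threshold). Since 9.5.0 sorts above 5.0.0, the result is 1 and the branch is skipped. A minimal sketch of such a comparator built on `sort -V` (the actual `semver` helper used by the testbed scripts may be implemented differently; this is an illustrative assumption):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the "semver A B" helper seen in the trace:
# prints -1, 0, or 1 depending on whether A sorts below, equal to, or above B.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
    return
  fi
  # sort -V applies version ordering; the first line is the lower version.
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

result=$(semver 9.5.0 5.0.0)
# Mirrors the guard in the trace: the legacy path runs only for result == -1.
if [ "$result" -eq -1 ]; then
  echo "pre-5.0.0 upgrade path"
else
  echo "skip"   # 9.5.0 > 5.0.0, so [[ 1 -eq -1 ]] is false, as logged above
fi
```

With this guard false, the script proceeds directly to `osism apply facts`, whose Ansible output follows.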
2026-02-20 03:58:04.168238 | orchestrator | 2026-02-20 03:58:04.168350 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-20 03:58:04.168366 | orchestrator | 2026-02-20 03:58:04.168377 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-20 03:58:04.168387 | orchestrator | Friday 20 February 2026 03:57:55 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-02-20 03:58:04.168397 | orchestrator | ok: [testbed-manager] 2026-02-20 03:58:04.168408 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:04.168417 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:04.168427 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:04.168436 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:58:04.168446 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:58:04.168455 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:58:04.168465 | orchestrator | 2026-02-20 03:58:04.168475 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-20 03:58:04.168509 | orchestrator | Friday 20 February 2026 03:57:56 +0000 (0:00:01.002) 0:00:01.262 ******* 2026-02-20 03:58:04.168520 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:58:04.168530 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:04.168539 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:58:04.168549 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:58:04.168559 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:58:04.168568 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:58:04.168578 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:58:04.168587 | orchestrator | 2026-02-20 03:58:04.168597 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-20 03:58:04.168607 | orchestrator | 2026-02-20 03:58:04.168616 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-20 03:58:04.168626 | orchestrator | Friday 20 February 2026 03:57:57 +0000 (0:00:00.980) 0:00:02.242 ******* 2026-02-20 03:58:04.168635 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:04.168645 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:04.168654 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:04.168664 | orchestrator | ok: [testbed-manager] 2026-02-20 03:58:04.168673 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:58:04.168683 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:58:04.168692 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:58:04.168702 | orchestrator | 2026-02-20 03:58:04.168711 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-20 03:58:04.168721 | orchestrator | 2026-02-20 03:58:04.168731 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-20 03:58:04.168741 | orchestrator | Friday 20 February 2026 03:58:03 +0000 (0:00:05.470) 0:00:07.713 ******* 2026-02-20 03:58:04.168750 | orchestrator | skipping: [testbed-manager] 2026-02-20 03:58:04.168760 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:04.168770 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:58:04.168779 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:58:04.168791 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:58:04.168803 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:58:04.168814 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:58:04.168824 | orchestrator | 2026-02-20 03:58:04.168835 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:58:04.168847 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168859 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-20 03:58:04.168871 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168896 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168906 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168916 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168926 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:04.168967 | orchestrator | 2026-02-20 03:58:04.168977 | orchestrator | 2026-02-20 03:58:04.168986 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:58:04.168996 | orchestrator | Friday 20 February 2026 03:58:03 +0000 (0:00:00.536) 0:00:08.249 ******* 2026-02-20 03:58:04.169005 | orchestrator | =============================================================================== 2026-02-20 03:58:04.169015 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s 2026-02-20 03:58:04.169033 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s 2026-02-20 03:58:04.169043 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.98s 2026-02-20 03:58:04.169052 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-02-20 03:58:04.429736 | orchestrator | + osism validate ceph-mons 2026-02-20 03:58:35.856730 | orchestrator | 2026-02-20 03:58:35.856857 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-20 03:58:35.856876 | orchestrator | 2026-02-20 03:58:35.856891 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-20 03:58:35.856988 | orchestrator | Friday 20 February 2026 03:58:20 +0000 (0:00:00.438) 0:00:00.438 ******* 2026-02-20 03:58:35.857003 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.857017 | orchestrator | 2026-02-20 03:58:35.857031 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-20 03:58:35.857046 | orchestrator | Friday 20 February 2026 03:58:21 +0000 (0:00:00.773) 0:00:01.212 ******* 2026-02-20 03:58:35.857058 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.857072 | orchestrator | 2026-02-20 03:58:35.857087 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-20 03:58:35.857100 | orchestrator | Friday 20 February 2026 03:58:22 +0000 (0:00:00.931) 0:00:02.143 ******* 2026-02-20 03:58:35.857115 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857130 | orchestrator | 2026-02-20 03:58:35.857144 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-20 03:58:35.857158 | orchestrator | Friday 20 February 2026 03:58:22 +0000 (0:00:00.127) 0:00:02.270 ******* 2026-02-20 03:58:35.857172 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857185 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:35.857198 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:35.857209 | orchestrator | 2026-02-20 03:58:35.857222 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-20 03:58:35.857234 | orchestrator | Friday 20 February 2026 03:58:23 +0000 (0:00:00.276) 0:00:02.547 ******* 2026-02-20 03:58:35.857247 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857258 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:35.857270 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:35.857283 | 
orchestrator | 2026-02-20 03:58:35.857295 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-20 03:58:35.857307 | orchestrator | Friday 20 February 2026 03:58:24 +0000 (0:00:01.014) 0:00:03.561 ******* 2026-02-20 03:58:35.857319 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857332 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:58:35.857343 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:58:35.857355 | orchestrator | 2026-02-20 03:58:35.857368 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-20 03:58:35.857380 | orchestrator | Friday 20 February 2026 03:58:24 +0000 (0:00:00.296) 0:00:03.858 ******* 2026-02-20 03:58:35.857392 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857405 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:35.857417 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:35.857428 | orchestrator | 2026-02-20 03:58:35.857441 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:58:35.857453 | orchestrator | Friday 20 February 2026 03:58:24 +0000 (0:00:00.471) 0:00:04.329 ******* 2026-02-20 03:58:35.857465 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857478 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:35.857490 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:35.857503 | orchestrator | 2026-02-20 03:58:35.857515 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-20 03:58:35.857528 | orchestrator | Friday 20 February 2026 03:58:25 +0000 (0:00:00.297) 0:00:04.626 ******* 2026-02-20 03:58:35.857539 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857578 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:58:35.857588 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:58:35.857600 | orchestrator | 2026-02-20 
03:58:35.857611 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-20 03:58:35.857623 | orchestrator | Friday 20 February 2026 03:58:25 +0000 (0:00:00.293) 0:00:04.920 ******* 2026-02-20 03:58:35.857635 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.857646 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:58:35.857658 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:58:35.857669 | orchestrator | 2026-02-20 03:58:35.857682 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:58:35.857694 | orchestrator | Friday 20 February 2026 03:58:25 +0000 (0:00:00.458) 0:00:05.379 ******* 2026-02-20 03:58:35.857705 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857716 | orchestrator | 2026-02-20 03:58:35.857728 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 03:58:35.857740 | orchestrator | Friday 20 February 2026 03:58:26 +0000 (0:00:00.247) 0:00:05.626 ******* 2026-02-20 03:58:35.857752 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857764 | orchestrator | 2026-02-20 03:58:35.857775 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-20 03:58:35.857787 | orchestrator | Friday 20 February 2026 03:58:26 +0000 (0:00:00.252) 0:00:05.878 ******* 2026-02-20 03:58:35.857799 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857811 | orchestrator | 2026-02-20 03:58:35.857824 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:35.857835 | orchestrator | Friday 20 February 2026 03:58:26 +0000 (0:00:00.241) 0:00:06.119 ******* 2026-02-20 03:58:35.857847 | orchestrator | 2026-02-20 03:58:35.857859 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:35.857871 | orchestrator | 
Friday 20 February 2026 03:58:26 +0000 (0:00:00.068) 0:00:06.188 ******* 2026-02-20 03:58:35.857882 | orchestrator | 2026-02-20 03:58:35.857910 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:35.857923 | orchestrator | Friday 20 February 2026 03:58:26 +0000 (0:00:00.069) 0:00:06.258 ******* 2026-02-20 03:58:35.857935 | orchestrator | 2026-02-20 03:58:35.857947 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-20 03:58:35.857958 | orchestrator | Friday 20 February 2026 03:58:26 +0000 (0:00:00.073) 0:00:06.331 ******* 2026-02-20 03:58:35.857969 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.857980 | orchestrator | 2026-02-20 03:58:35.857992 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-20 03:58:35.858075 | orchestrator | Friday 20 February 2026 03:58:27 +0000 (0:00:00.235) 0:00:06.567 ******* 2026-02-20 03:58:35.858093 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858104 | orchestrator | 2026-02-20 03:58:35.858135 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-20 03:58:35.858147 | orchestrator | Friday 20 February 2026 03:58:27 +0000 (0:00:00.234) 0:00:06.802 ******* 2026-02-20 03:58:35.858159 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858170 | orchestrator | 2026-02-20 03:58:35.858181 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-20 03:58:35.858192 | orchestrator | Friday 20 February 2026 03:58:27 +0000 (0:00:00.104) 0:00:06.906 ******* 2026-02-20 03:58:35.858203 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:58:35.858218 | orchestrator | 2026-02-20 03:58:35.858229 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-20 03:58:35.858239 | orchestrator | Friday 
20 February 2026 03:58:28 +0000 (0:00:01.533) 0:00:08.440 ******* 2026-02-20 03:58:35.858250 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858260 | orchestrator | 2026-02-20 03:58:35.858269 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-02-20 03:58:35.858280 | orchestrator | Friday 20 February 2026 03:58:29 +0000 (0:00:00.517) 0:00:08.957 ******* 2026-02-20 03:58:35.858302 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858314 | orchestrator | 2026-02-20 03:58:35.858324 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-02-20 03:58:35.858335 | orchestrator | Friday 20 February 2026 03:58:29 +0000 (0:00:00.134) 0:00:09.092 ******* 2026-02-20 03:58:35.858347 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858358 | orchestrator | 2026-02-20 03:58:35.858369 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-02-20 03:58:35.858380 | orchestrator | Friday 20 February 2026 03:58:29 +0000 (0:00:00.315) 0:00:09.408 ******* 2026-02-20 03:58:35.858391 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858402 | orchestrator | 2026-02-20 03:58:35.858414 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-02-20 03:58:35.858425 | orchestrator | Friday 20 February 2026 03:58:30 +0000 (0:00:00.304) 0:00:09.712 ******* 2026-02-20 03:58:35.858436 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858447 | orchestrator | 2026-02-20 03:58:35.858459 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-02-20 03:58:35.858471 | orchestrator | Friday 20 February 2026 03:58:30 +0000 (0:00:00.110) 0:00:09.822 ******* 2026-02-20 03:58:35.858482 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858493 | orchestrator | 2026-02-20 03:58:35.858506 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2026-02-20 03:58:35.858518 | orchestrator | Friday 20 February 2026 03:58:30 +0000 (0:00:00.118) 0:00:09.941 ******* 2026-02-20 03:58:35.858531 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858542 | orchestrator | 2026-02-20 03:58:35.858555 | orchestrator | TASK [Gather status data] ****************************************************** 2026-02-20 03:58:35.858568 | orchestrator | Friday 20 February 2026 03:58:30 +0000 (0:00:00.128) 0:00:10.070 ******* 2026-02-20 03:58:35.858581 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:58:35.858594 | orchestrator | 2026-02-20 03:58:35.858607 | orchestrator | TASK [Set health test data] **************************************************** 2026-02-20 03:58:35.858619 | orchestrator | Friday 20 February 2026 03:58:31 +0000 (0:00:01.259) 0:00:11.329 ******* 2026-02-20 03:58:35.858630 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858643 | orchestrator | 2026-02-20 03:58:35.858655 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-02-20 03:58:35.858668 | orchestrator | Friday 20 February 2026 03:58:32 +0000 (0:00:00.289) 0:00:11.619 ******* 2026-02-20 03:58:35.858681 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858693 | orchestrator | 2026-02-20 03:58:35.858706 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-02-20 03:58:35.858719 | orchestrator | Friday 20 February 2026 03:58:32 +0000 (0:00:00.146) 0:00:11.766 ******* 2026-02-20 03:58:35.858731 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:58:35.858744 | orchestrator | 2026-02-20 03:58:35.858757 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-02-20 03:58:35.858769 | orchestrator | Friday 20 February 2026 03:58:32 +0000 (0:00:00.145) 0:00:11.912 ******* 2026-02-20 03:58:35.858782 | orchestrator | 
skipping: [testbed-node-0] 2026-02-20 03:58:35.858795 | orchestrator | 2026-02-20 03:58:35.858808 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-02-20 03:58:35.858821 | orchestrator | Friday 20 February 2026 03:58:32 +0000 (0:00:00.135) 0:00:12.047 ******* 2026-02-20 03:58:35.858843 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858857 | orchestrator | 2026-02-20 03:58:35.858870 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-20 03:58:35.858882 | orchestrator | Friday 20 February 2026 03:58:32 +0000 (0:00:00.287) 0:00:12.335 ******* 2026-02-20 03:58:35.858916 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.858930 | orchestrator | 2026-02-20 03:58:35.858942 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-20 03:58:35.858954 | orchestrator | Friday 20 February 2026 03:58:33 +0000 (0:00:00.270) 0:00:12.605 ******* 2026-02-20 03:58:35.858976 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:58:35.858989 | orchestrator | 2026-02-20 03:58:35.859001 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:58:35.859013 | orchestrator | Friday 20 February 2026 03:58:33 +0000 (0:00:00.253) 0:00:12.859 ******* 2026-02-20 03:58:35.859026 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.859038 | orchestrator | 2026-02-20 03:58:35.859050 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 03:58:35.859063 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:01.681) 0:00:14.540 ******* 2026-02-20 03:58:35.859075 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.859087 | orchestrator | 2026-02-20 03:58:35.859100 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-02-20 03:58:35.859112 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:00.271) 0:00:14.811 ******* 2026-02-20 03:58:35.859124 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:35.859136 | orchestrator | 2026-02-20 03:58:35.859163 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:38.342118 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:00.259) 0:00:15.071 ******* 2026-02-20 03:58:38.342225 | orchestrator | 2026-02-20 03:58:38.342243 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:38.342255 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:00.069) 0:00:15.140 ******* 2026-02-20 03:58:38.342266 | orchestrator | 2026-02-20 03:58:38.342279 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:58:38.342290 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:00.070) 0:00:15.210 ******* 2026-02-20 03:58:38.342301 | orchestrator | 2026-02-20 03:58:38.342312 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-20 03:58:38.342323 | orchestrator | Friday 20 February 2026 03:58:35 +0000 (0:00:00.072) 0:00:15.283 ******* 2026-02-20 03:58:38.342335 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:58:38.342345 | orchestrator | 2026-02-20 03:58:38.342357 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-20 03:58:38.342368 | orchestrator | Friday 20 February 2026 03:58:37 +0000 (0:00:01.428) 0:00:16.711 ******* 2026-02-20 03:58:38.342379 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-20 03:58:38.342390 | orchestrator |  "msg": [ 2026-02-20 
03:58:38.342403 | orchestrator |  "Validator run completed.", 2026-02-20 03:58:38.342414 | orchestrator |  "You can find the report file here:", 2026-02-20 03:58:38.342425 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-20T03:58:21+00:00-report.json", 2026-02-20 03:58:38.342437 | orchestrator |  "on the following host:", 2026-02-20 03:58:38.342448 | orchestrator |  "testbed-manager" 2026-02-20 03:58:38.342459 | orchestrator |  ] 2026-02-20 03:58:38.342471 | orchestrator | } 2026-02-20 03:58:38.342482 | orchestrator | 2026-02-20 03:58:38.342493 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:58:38.342505 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-20 03:58:38.342517 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:38.342529 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:58:38.342540 | orchestrator | 2026-02-20 03:58:38.342551 | orchestrator | 2026-02-20 03:58:38.342562 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:58:38.342573 | orchestrator | Friday 20 February 2026 03:58:38 +0000 (0:00:00.770) 0:00:17.481 ******* 2026-02-20 03:58:38.342610 | orchestrator | =============================================================================== 2026-02-20 03:58:38.342623 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-02-20 03:58:38.342636 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.53s 2026-02-20 03:58:38.342648 | orchestrator | Write report file ------------------------------------------------------- 1.43s 2026-02-20 03:58:38.342661 | orchestrator | Gather status data 
------------------------------------------------------ 1.26s 2026-02-20 03:58:38.342673 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-02-20 03:58:38.342686 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2026-02-20 03:58:38.342699 | orchestrator | Get timestamp for report file ------------------------------------------- 0.77s 2026-02-20 03:58:38.342711 | orchestrator | Print report file information ------------------------------------------- 0.77s 2026-02-20 03:58:38.342723 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s 2026-02-20 03:58:38.342736 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2026-02-20 03:58:38.342763 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.46s 2026-02-20 03:58:38.342775 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2026-02-20 03:58:38.342788 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2026-02-20 03:58:38.342800 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-02-20 03:58:38.342813 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-02-20 03:58:38.342825 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2026-02-20 03:58:38.342838 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2026-02-20 03:58:38.342851 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.29s 2026-02-20 03:58:38.342863 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-02-20 03:58:38.342875 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.27s 2026-02-20 03:58:38.626433 | orchestrator | + osism validate ceph-mgrs 2026-02-20 03:59:01.653074 | orchestrator | 2026-02-20 03:59:01.653188 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-20 03:59:01.653204 | orchestrator | 2026-02-20 03:59:01.653216 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-20 03:59:01.653227 | orchestrator | Friday 20 February 2026 03:58:47 +0000 (0:00:00.420) 0:00:00.420 ******* 2026-02-20 03:59:01.653237 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.653247 | orchestrator | 2026-02-20 03:59:01.653257 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-20 03:59:01.653267 | orchestrator | Friday 20 February 2026 03:58:48 +0000 (0:00:00.800) 0:00:01.221 ******* 2026-02-20 03:59:01.653277 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.653287 | orchestrator | 2026-02-20 03:59:01.653297 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-20 03:59:01.653307 | orchestrator | Friday 20 February 2026 03:58:49 +0000 (0:00:00.895) 0:00:02.117 ******* 2026-02-20 03:59:01.653317 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653327 | orchestrator | 2026-02-20 03:59:01.653337 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-20 03:59:01.653347 | orchestrator | Friday 20 February 2026 03:58:49 +0000 (0:00:00.127) 0:00:02.244 ******* 2026-02-20 03:59:01.653357 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653367 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:59:01.653376 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:59:01.653386 | orchestrator | 2026-02-20 03:59:01.653396 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-20 03:59:01.653406 | orchestrator | Friday 20 February 2026 03:58:49 +0000 (0:00:00.317) 0:00:02.562 ******* 2026-02-20 03:59:01.653436 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:59:01.653446 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653456 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:59:01.653466 | orchestrator | 2026-02-20 03:59:01.653476 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-20 03:59:01.653485 | orchestrator | Friday 20 February 2026 03:58:51 +0000 (0:00:01.106) 0:00:03.669 ******* 2026-02-20 03:59:01.653495 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.653505 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:59:01.653515 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:59:01.653525 | orchestrator | 2026-02-20 03:59:01.653534 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-20 03:59:01.653544 | orchestrator | Friday 20 February 2026 03:58:51 +0000 (0:00:00.276) 0:00:03.945 ******* 2026-02-20 03:59:01.653555 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653564 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:59:01.653574 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:59:01.653584 | orchestrator | 2026-02-20 03:59:01.653593 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:01.653603 | orchestrator | Friday 20 February 2026 03:58:51 +0000 (0:00:00.444) 0:00:04.389 ******* 2026-02-20 03:59:01.653614 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653626 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:59:01.653637 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:59:01.653648 | orchestrator | 2026-02-20 03:59:01.653660 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-20 03:59:01.653672 | orchestrator | Friday 20 February 2026 03:58:52 +0000 (0:00:00.296) 0:00:04.686 ******* 2026-02-20 03:59:01.653683 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.653695 | orchestrator | skipping: [testbed-node-1] 2026-02-20 03:59:01.653706 | orchestrator | skipping: [testbed-node-2] 2026-02-20 03:59:01.653717 | orchestrator | 2026-02-20 03:59:01.653727 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-20 03:59:01.653737 | orchestrator | Friday 20 February 2026 03:58:52 +0000 (0:00:00.277) 0:00:04.963 ******* 2026-02-20 03:59:01.653746 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.653756 | orchestrator | ok: [testbed-node-1] 2026-02-20 03:59:01.653766 | orchestrator | ok: [testbed-node-2] 2026-02-20 03:59:01.653775 | orchestrator | 2026-02-20 03:59:01.653785 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:59:01.653795 | orchestrator | Friday 20 February 2026 03:58:52 +0000 (0:00:00.434) 0:00:05.398 ******* 2026-02-20 03:59:01.653804 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.653814 | orchestrator | 2026-02-20 03:59:01.653824 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 03:59:01.653833 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.237) 0:00:05.635 ******* 2026-02-20 03:59:01.653843 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.653853 | orchestrator | 2026-02-20 03:59:01.653862 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-20 03:59:01.653903 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.242) 0:00:05.877 ******* 2026-02-20 03:59:01.653921 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.653939 | orchestrator | 2026-02-20 03:59:01.653956 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-20 03:59:01.653973 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.235) 0:00:06.113 ******* 2026-02-20 03:59:01.653988 | orchestrator | 2026-02-20 03:59:01.653998 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:01.654008 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.068) 0:00:06.182 ******* 2026-02-20 03:59:01.654077 | orchestrator | 2026-02-20 03:59:01.654088 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:01.654099 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.068) 0:00:06.251 ******* 2026-02-20 03:59:01.654129 | orchestrator | 2026-02-20 03:59:01.654145 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-20 03:59:01.654162 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.071) 0:00:06.322 ******* 2026-02-20 03:59:01.654216 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.654235 | orchestrator | 2026-02-20 03:59:01.654250 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-20 03:59:01.654266 | orchestrator | Friday 20 February 2026 03:58:53 +0000 (0:00:00.244) 0:00:06.567 ******* 2026-02-20 03:59:01.654280 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.654294 | orchestrator | 2026-02-20 03:59:01.654332 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-20 03:59:01.654351 | orchestrator | Friday 20 February 2026 03:58:54 +0000 (0:00:00.254) 0:00:06.821 ******* 2026-02-20 03:59:01.654367 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.654383 | orchestrator | 2026-02-20 03:59:01.654400 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-20 03:59:01.654416 | orchestrator | Friday 20 February 2026 03:58:54 +0000 (0:00:00.098) 0:00:06.920 ******* 2026-02-20 03:59:01.654434 | orchestrator | changed: [testbed-node-0] 2026-02-20 03:59:01.654451 | orchestrator | 2026-02-20 03:59:01.654467 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-20 03:59:01.654484 | orchestrator | Friday 20 February 2026 03:58:56 +0000 (0:00:02.015) 0:00:08.935 ******* 2026-02-20 03:59:01.654500 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.654517 | orchestrator | 2026-02-20 03:59:01.654557 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-20 03:59:01.654575 | orchestrator | Friday 20 February 2026 03:58:56 +0000 (0:00:00.382) 0:00:09.318 ******* 2026-02-20 03:59:01.654592 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.654608 | orchestrator | 2026-02-20 03:59:01.654625 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-20 03:59:01.654642 | orchestrator | Friday 20 February 2026 03:58:57 +0000 (0:00:00.300) 0:00:09.618 ******* 2026-02-20 03:59:01.654660 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.654677 | orchestrator | 2026-02-20 03:59:01.654695 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-20 03:59:01.654710 | orchestrator | Friday 20 February 2026 03:58:57 +0000 (0:00:00.126) 0:00:09.745 ******* 2026-02-20 03:59:01.654720 | orchestrator | ok: [testbed-node-0] 2026-02-20 03:59:01.654730 | orchestrator | 2026-02-20 03:59:01.654739 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-20 03:59:01.654749 | orchestrator | Friday 20 February 2026 03:58:57 +0000 (0:00:00.137) 0:00:09.882 ******* 2026-02-20 03:59:01.654759 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 
03:59:01.654769 | orchestrator | 2026-02-20 03:59:01.654778 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-20 03:59:01.654788 | orchestrator | Friday 20 February 2026 03:58:57 +0000 (0:00:00.241) 0:00:10.124 ******* 2026-02-20 03:59:01.654801 | orchestrator | skipping: [testbed-node-0] 2026-02-20 03:59:01.654818 | orchestrator | 2026-02-20 03:59:01.654834 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:59:01.654849 | orchestrator | Friday 20 February 2026 03:58:57 +0000 (0:00:00.237) 0:00:10.362 ******* 2026-02-20 03:59:01.654902 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.654921 | orchestrator | 2026-02-20 03:59:01.654936 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 03:59:01.654951 | orchestrator | Friday 20 February 2026 03:58:59 +0000 (0:00:01.281) 0:00:11.643 ******* 2026-02-20 03:59:01.654969 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.654985 | orchestrator | 2026-02-20 03:59:01.655003 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-20 03:59:01.655018 | orchestrator | Friday 20 February 2026 03:58:59 +0000 (0:00:00.250) 0:00:11.894 ******* 2026-02-20 03:59:01.655048 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.655058 | orchestrator | 2026-02-20 03:59:01.655068 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:01.655078 | orchestrator | Friday 20 February 2026 03:58:59 +0000 (0:00:00.259) 0:00:12.154 ******* 2026-02-20 03:59:01.655088 | orchestrator | 2026-02-20 03:59:01.655097 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:01.655107 | orchestrator 
| Friday 20 February 2026 03:58:59 +0000 (0:00:00.081) 0:00:12.235 ******* 2026-02-20 03:59:01.655117 | orchestrator | 2026-02-20 03:59:01.655127 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:01.655136 | orchestrator | Friday 20 February 2026 03:58:59 +0000 (0:00:00.070) 0:00:12.306 ******* 2026-02-20 03:59:01.655146 | orchestrator | 2026-02-20 03:59:01.655155 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-20 03:59:01.655165 | orchestrator | Friday 20 February 2026 03:58:59 +0000 (0:00:00.230) 0:00:12.537 ******* 2026-02-20 03:59:01.655175 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:01.655185 | orchestrator | 2026-02-20 03:59:01.655195 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-20 03:59:01.655204 | orchestrator | Friday 20 February 2026 03:59:01 +0000 (0:00:01.288) 0:00:13.826 ******* 2026-02-20 03:59:01.655214 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-20 03:59:01.655224 | orchestrator |  "msg": [ 2026-02-20 03:59:01.655234 | orchestrator |  "Validator run completed.", 2026-02-20 03:59:01.655250 | orchestrator |  "You can find the report file here:", 2026-02-20 03:59:01.655260 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-20T03:58:48+00:00-report.json", 2026-02-20 03:59:01.655271 | orchestrator |  "on the following host:", 2026-02-20 03:59:01.655281 | orchestrator |  "testbed-manager" 2026-02-20 03:59:01.655291 | orchestrator |  ] 2026-02-20 03:59:01.655301 | orchestrator | } 2026-02-20 03:59:01.655311 | orchestrator | 2026-02-20 03:59:01.655321 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:59:01.655332 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-20 03:59:01.655344 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:59:01.655366 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 03:59:01.941135 | orchestrator | 2026-02-20 03:59:01.941219 | orchestrator | 2026-02-20 03:59:01.941230 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:59:01.941239 | orchestrator | Friday 20 February 2026 03:59:01 +0000 (0:00:00.392) 0:00:14.218 ******* 2026-02-20 03:59:01.941247 | orchestrator | =============================================================================== 2026-02-20 03:59:01.941254 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.02s 2026-02-20 03:59:01.941262 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-02-20 03:59:01.941270 | orchestrator | Aggregate test results step one ----------------------------------------- 1.28s 2026-02-20 03:59:01.941277 | orchestrator | Get container info ------------------------------------------------------ 1.11s 2026-02-20 03:59:01.941285 | orchestrator | Create report output directory ------------------------------------------ 0.90s 2026-02-20 03:59:01.941292 | orchestrator | Get timestamp for report file ------------------------------------------- 0.80s 2026-02-20 03:59:01.941299 | orchestrator | Set test result to passed if container is existing ---------------------- 0.44s 2026-02-20 03:59:01.941307 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.43s 2026-02-20 03:59:01.941335 | orchestrator | Print report file information ------------------------------------------- 0.39s 2026-02-20 03:59:01.941343 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-02-20 03:59:01.941350 | 
orchestrator | Parse mgr module list from json ----------------------------------------- 0.38s 2026-02-20 03:59:01.941357 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-02-20 03:59:01.941364 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s 2026-02-20 03:59:01.941372 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-02-20 03:59:01.941379 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2026-02-20 03:59:01.941386 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-02-20 03:59:01.941394 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-02-20 03:59:01.941401 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s 2026-02-20 03:59:01.941408 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2026-02-20 03:59:01.941416 | orchestrator | Print report file information ------------------------------------------- 0.24s 2026-02-20 03:59:02.207813 | orchestrator | + osism validate ceph-osds 2026-02-20 03:59:23.087429 | orchestrator | 2026-02-20 03:59:23.087578 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-20 03:59:23.087606 | orchestrator | 2026-02-20 03:59:23.087627 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-20 03:59:23.087649 | orchestrator | Friday 20 February 2026 03:59:18 +0000 (0:00:00.415) 0:00:00.415 ******* 2026-02-20 03:59:23.087669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:23.087690 | orchestrator | 2026-02-20 03:59:23.087711 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-02-20 03:59:23.087731 | orchestrator | Friday 20 February 2026 03:59:19 +0000 (0:00:00.837) 0:00:01.252 ******* 2026-02-20 03:59:23.087751 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:23.087771 | orchestrator | 2026-02-20 03:59:23.087792 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-20 03:59:23.087811 | orchestrator | Friday 20 February 2026 03:59:19 +0000 (0:00:00.509) 0:00:01.761 ******* 2026-02-20 03:59:23.087831 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:23.087889 | orchestrator | 2026-02-20 03:59:23.087925 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-20 03:59:23.087946 | orchestrator | Friday 20 February 2026 03:59:20 +0000 (0:00:00.812) 0:00:02.574 ******* 2026-02-20 03:59:23.087965 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:23.087989 | orchestrator | 2026-02-20 03:59:23.088011 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-20 03:59:23.088032 | orchestrator | Friday 20 February 2026 03:59:20 +0000 (0:00:00.126) 0:00:02.700 ******* 2026-02-20 03:59:23.088055 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:23.088076 | orchestrator | 2026-02-20 03:59:23.088096 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-20 03:59:23.088117 | orchestrator | Friday 20 February 2026 03:59:21 +0000 (0:00:00.134) 0:00:02.834 ******* 2026-02-20 03:59:23.088138 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:23.088157 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:23.088175 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:23.088192 | orchestrator | 2026-02-20 03:59:23.088232 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-02-20 03:59:23.088251 | orchestrator | Friday 20 February 2026 03:59:21 +0000 (0:00:00.300) 0:00:03.135 ******* 2026-02-20 03:59:23.088269 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:23.088288 | orchestrator | 2026-02-20 03:59:23.088308 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-20 03:59:23.088357 | orchestrator | Friday 20 February 2026 03:59:21 +0000 (0:00:00.151) 0:00:03.286 ******* 2026-02-20 03:59:23.088377 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:23.088395 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:23.088413 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:23.088428 | orchestrator | 2026-02-20 03:59:23.088443 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-20 03:59:23.088459 | orchestrator | Friday 20 February 2026 03:59:21 +0000 (0:00:00.309) 0:00:03.595 ******* 2026-02-20 03:59:23.088475 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:23.088492 | orchestrator | 2026-02-20 03:59:23.088509 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:23.088526 | orchestrator | Friday 20 February 2026 03:59:22 +0000 (0:00:00.720) 0:00:04.315 ******* 2026-02-20 03:59:23.088542 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:23.088559 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:23.088576 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:23.088594 | orchestrator | 2026-02-20 03:59:23.088612 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-20 03:59:23.088631 | orchestrator | Friday 20 February 2026 03:59:22 +0000 (0:00:00.292) 0:00:04.607 ******* 2026-02-20 03:59:23.088652 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fcbf71ffcddc7546bcff6d871a5e04c5786dff497ef076bbddfba14c4fa58977', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-20 03:59:23.088676 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65cdc2e4a7d221659bdc87383df39d9f93528947dfaff439aa5f95477eeab3ca', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.088698 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd78365f79932143ad4ab27337c17c8d2820654df044229cbdbd6f005358047f5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.088717 | orchestrator | skipping: [testbed-node-3] => (item={'id': '501cbca1110bdf3e4d61a16d9f0fc6fdf330bf7a47ea8ce36a327c75ff576a17', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-20 03:59:23.088736 | orchestrator | skipping: [testbed-node-3] => (item={'id': '15c9ee5cc33292650c0d414ed92788a6a0ecc582c4dda04cc2133f66236ab364', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.088792 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c1bd0e0a00729dbe1f9f77a086c8eb4b2c56895219ca01b72071dc3f759949d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.088814 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af890df82c36feb9a7b98e2611865b9b235607ca8523962ec77e3c3dc12f432f', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-20 03:59:23.088833 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b2ab6207f00c36e8a458fb3707f0bf265149aeeff0d48ea6506096ac07568837', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-20 03:59:23.088879 | orchestrator | skipping: [testbed-node-3] => (item={'id': '479b72138cde58427767883ff5ef650189f694eda255b1c7d040d4a4388414d4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.088918 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fab859af742378a571fa86ad59ec1e5d66d3e115b79b85e779a88118478f59be', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.088940 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7f1e2ca867c09411521e9b6edcff0e1684e9e7f1019c6f66aded8655a848649f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.088962 | orchestrator | ok: [testbed-node-3] => (item={'id': '40f9e034a1e8563af2f01a9b5fdaa383637b006fbb8971ff6f37e620dc0122d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:23.088982 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c04cbd234bbbffe3d6b581fd5f76b5342655f4d6fee802c899bfbaa0bb0389d2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:23.089002 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'3620ee4f29623bc3246895382ad5b56cc9632e2e28aca8436cbd1a7e7119707f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.089022 | orchestrator | skipping: [testbed-node-3] => (item={'id': '940f4dde896e6ebc612d31550f7c2c247b73078b486cdee38f01fa0b8f63eb56', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:23.089042 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'feb5132233c01185c294fa4b6941204da5027be6f0fd440a0339ee33650c1327', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:23.089061 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3050d6f2118ef27ea2c518e33bc5edf32c986fcdc560a802a334b76072360316', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.089080 | orchestrator | skipping: [testbed-node-3] => (item={'id': '48a65a6366e1d4b8e5823e9ffb6126b67ddca4efa833e7ee9538748b4d66ad9f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.089100 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ef91477b3819d240a589b7665d2b8e1f6a1f76947b28a88f19f20938423a0802', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.089121 | orchestrator | skipping: [testbed-node-4] => (item={'id': '766612c8c5a1a09075e25a512db9ac75a61a4403e1c5be60b7be575594d20e23', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-20 03:59:23.089155 | orchestrator | skipping: [testbed-node-4] => (item={'id': '51870291b22de464e75890e419ce532e297aa382f596d5912ad3bbcb11464631', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.326715 | orchestrator | skipping: [testbed-node-4] => (item={'id': '184ad3cefbf8d6de008097c95d250f743e15d5d0eec8baee104cc61acd015ef3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.326885 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5997d00f86dcd2fc46614c0e021908877918be1e60f99cbc4f5b19ab4fa10802', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-20 03:59:23.326922 | orchestrator | skipping: [testbed-node-4] => (item={'id': '575325cd612c13476d42603c021f46782bbc097bc89a12c61f697eb4d5a6019e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.326936 | orchestrator | skipping: [testbed-node-4] => (item={'id': '07406dd6fafa97068b06ba866ff32befaaff778d62af1c02c603752c4c1a843d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.326951 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2986bb40c8ce90f477f49b2c805c82085ab7c5a613c8d07cb9275b0a0a6ff941', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-20 03:59:23.326962 | orchestrator | skipping: [testbed-node-4] => (item={'id': '12e6fa2f2d68af25696b21ed7dcdd72cae458eaaa2f2596941232c2e61224cec', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-20 03:59:23.326973 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd8ea9149187a48816efa9fcf29540f886076143e542318fa37b81ee991b4bdec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.326984 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0b862157fb9c152920a1760eaac61638657bdeae17c71bec37a5f4656bfd1ac', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.326994 | orchestrator | skipping: [testbed-node-4] => (item={'id': '341f97ca01c35832a5acc9728f6bb2c111ba673dcbe54ac205bb3c963d81f815', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.327006 | orchestrator | ok: [testbed-node-4] => (item={'id': '7c9a1f6f8f1c9243d6bdc5b02140c5f3554ab14b259d189a6f783855b58b85cc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:23.327017 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ea8837b21daaf7156c951626dfeebd3b36929a1de4015e3a937ff9b697d1d342', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:23.327028 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'c724121aa6e92083c458f2ccdf6bf37e2239a647b0ca7b8c5edf1a80e988c974', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.327051 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcb049392e31b0b54c89352b647883e1318ed1d02304c63441fd0b7b6c38c435', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:23.327072 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c7c35a58ce5e0f959fedb1e63e505520daea04ad8586d233ae55bd9fac086cd', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:23.327099 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd662f5ce7c4175d19dca4f21962e796e918aef815a9ec81ec08d258c056819e3', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.327117 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e216697da791ddfa1a9132678197e628667f4e458f0f3c5326b9d1319ea4743', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.327127 | orchestrator | skipping: [testbed-node-4] => (item={'id': '233580b823ed3067c5faad71c5f0ee3939d683784245fa4a74431532abc3b35c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:23.327138 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67dac22b1a11e189d9a2e597b37e7500d1f733d4c7498bedd480b2514c39abff', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-20 03:59:23.327148 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12dae9da433338c29fef5663c65ea7d00331bab610542f02ff9519d9f1c55c6e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.327163 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6d137320cbf2d07a4696c4ab261d280cbae264854f2a040e13e87d5bd822ea45', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-20 03:59:23.327173 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1450e4f9dd4117d993650ea286025a69a44186edb46c6b2ba705e4b6fa5df473', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-20 03:59:23.327183 | orchestrator | skipping: [testbed-node-5] => (item={'id': '99352a551e529fbf96b1800704774794cafacfa804aacd99bc4a81df19049a94', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.327193 | orchestrator | skipping: [testbed-node-5] => (item={'id': '23bd703dfd0f141346ebcbb19e14b5df5c512fda4dfed7b3ab88e9fee395cf18', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-20 03:59:23.327203 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb9859dbeb88981601154cf34558484960bd40d8760ba35fa8d42302f8df38f4', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-20 03:59:23.327213 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12cd9d7a3b7e55b009832bb385b50c084f128acf589521cdb981db6a0671900e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-20 03:59:23.327223 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3bc694dab38b573b9e4a6060b1a87eb68a7c3aff6b749c04054136127883b718', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.327233 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b9c3d766f926b22d964894d639a337adb8afbeb8e9d0c9cf43822ba025c65d2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.327243 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04d6227bffa452d6ff689f37bb999ee147f4cc044a84b18d41fea07432cca161', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:23.327260 | orchestrator | ok: [testbed-node-5] => (item={'id': '90169e77aa23192abbab067a9f913c2910154bd97bd4077614687c15e8d21ab0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:23.327278 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd87e0850a49d92c8b3a76ab6596770b4291101123b764680af16110b0b14fb69', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-20 03:59:34.070360 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'2ec75e1d1e5a940669d685d80be1b9f9570f83e944fc72c5e1837fff5f40ca0c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-20 03:59:34.070479 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9d04bdc1b3cb780308f72658255ad5fcd34d4bd436e605db54cc46c25930de68', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:34.070497 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b9b9b141d4edf7df9cbecb0f9fb9fb407e1142becac20b7cd38a8f6be78ecda8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-20 03:59:34.070512 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3d61d01e4ed3a409daff46100a430e24c2c35eb7f3812a9ae466bd54d96830ae', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:34.070540 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd741bc4b45b257619e71403d3de42f0a70b1171516185887835c9a4de6d0ccda', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:34.070553 | orchestrator | skipping: [testbed-node-5] => (item={'id': '602002edfed942022e46321079bfda3acaec0d57a65eaa0d908876b43c93612c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-20 03:59:34.070565 | orchestrator | 2026-02-20 03:59:34.070578 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-20 03:59:34.070590 | orchestrator | Friday 20 February 2026 
03:59:23 +0000 (0:00:00.491) 0:00:05.099 ******* 2026-02-20 03:59:34.070601 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.070613 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.070624 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.070635 | orchestrator | 2026-02-20 03:59:34.070646 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-20 03:59:34.070657 | orchestrator | Friday 20 February 2026 03:59:23 +0000 (0:00:00.272) 0:00:05.372 ******* 2026-02-20 03:59:34.070668 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.070680 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:34.070691 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:34.070702 | orchestrator | 2026-02-20 03:59:34.070713 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-20 03:59:34.070724 | orchestrator | Friday 20 February 2026 03:59:24 +0000 (0:00:00.426) 0:00:05.799 ******* 2026-02-20 03:59:34.070735 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.070745 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.070756 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.070767 | orchestrator | 2026-02-20 03:59:34.070778 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:34.070789 | orchestrator | Friday 20 February 2026 03:59:24 +0000 (0:00:00.300) 0:00:06.100 ******* 2026-02-20 03:59:34.070800 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.070811 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.070872 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.070885 | orchestrator | 2026-02-20 03:59:34.070896 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-20 03:59:34.070910 | orchestrator | Friday 20 February 2026 03:59:24 +0000 (0:00:00.279) 0:00:06.379 ******* 
2026-02-20 03:59:34.070923 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-20 03:59:34.070937 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-20 03:59:34.070950 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.070962 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-20 03:59:34.070975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-20 03:59:34.070988 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:34.071000 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-20 03:59:34.071013 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-20 03:59:34.071025 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:34.071037 | orchestrator | 2026-02-20 03:59:34.071050 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-20 03:59:34.071063 | orchestrator | Friday 20 February 2026 03:59:24 +0000 (0:00:00.305) 0:00:06.685 ******* 2026-02-20 03:59:34.071075 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071088 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.071101 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.071112 | orchestrator | 2026-02-20 03:59:34.071124 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-20 03:59:34.071135 | orchestrator | Friday 20 February 2026 03:59:25 +0000 (0:00:00.462) 0:00:07.148 ******* 2026-02-20 03:59:34.071146 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071176 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:34.071188 | 
orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:34.071199 | orchestrator | 2026-02-20 03:59:34.071210 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-20 03:59:34.071221 | orchestrator | Friday 20 February 2026 03:59:25 +0000 (0:00:00.283) 0:00:07.431 ******* 2026-02-20 03:59:34.071232 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071243 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:34.071254 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:34.071264 | orchestrator | 2026-02-20 03:59:34.071275 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-20 03:59:34.071286 | orchestrator | Friday 20 February 2026 03:59:25 +0000 (0:00:00.274) 0:00:07.706 ******* 2026-02-20 03:59:34.071297 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071308 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.071319 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.071330 | orchestrator | 2026-02-20 03:59:34.071341 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:59:34.071368 | orchestrator | Friday 20 February 2026 03:59:26 +0000 (0:00:00.286) 0:00:07.992 ******* 2026-02-20 03:59:34.071380 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071401 | orchestrator | 2026-02-20 03:59:34.071413 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 03:59:34.071423 | orchestrator | Friday 20 February 2026 03:59:26 +0000 (0:00:00.588) 0:00:08.580 ******* 2026-02-20 03:59:34.071434 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071445 | orchestrator | 2026-02-20 03:59:34.071456 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-20 03:59:34.071467 | orchestrator | Friday 20 February 2026 03:59:27 +0000 
(0:00:00.240) 0:00:08.821 ******* 2026-02-20 03:59:34.071478 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071489 | orchestrator | 2026-02-20 03:59:34.071500 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:34.071519 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.246) 0:00:09.067 ******* 2026-02-20 03:59:34.071530 | orchestrator | 2026-02-20 03:59:34.071541 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:34.071552 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.069) 0:00:09.136 ******* 2026-02-20 03:59:34.071562 | orchestrator | 2026-02-20 03:59:34.071573 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:34.071584 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.068) 0:00:09.204 ******* 2026-02-20 03:59:34.071595 | orchestrator | 2026-02-20 03:59:34.071606 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-20 03:59:34.071617 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.070) 0:00:09.274 ******* 2026-02-20 03:59:34.071628 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071639 | orchestrator | 2026-02-20 03:59:34.071650 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-20 03:59:34.071661 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.247) 0:00:09.522 ******* 2026-02-20 03:59:34.071672 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071682 | orchestrator | 2026-02-20 03:59:34.071693 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:34.071704 | orchestrator | Friday 20 February 2026 03:59:27 +0000 (0:00:00.251) 0:00:09.773 ******* 2026-02-20 03:59:34.071715 | 
orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071726 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.071737 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.071748 | orchestrator | 2026-02-20 03:59:34.071759 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-20 03:59:34.071770 | orchestrator | Friday 20 February 2026 03:59:28 +0000 (0:00:00.283) 0:00:10.057 ******* 2026-02-20 03:59:34.071781 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071792 | orchestrator | 2026-02-20 03:59:34.071802 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-20 03:59:34.071813 | orchestrator | Friday 20 February 2026 03:59:28 +0000 (0:00:00.599) 0:00:10.657 ******* 2026-02-20 03:59:34.071824 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 03:59:34.071835 | orchestrator | 2026-02-20 03:59:34.071864 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-20 03:59:34.071875 | orchestrator | Friday 20 February 2026 03:59:30 +0000 (0:00:01.638) 0:00:12.295 ******* 2026-02-20 03:59:34.071886 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071897 | orchestrator | 2026-02-20 03:59:34.071907 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-20 03:59:34.071918 | orchestrator | Friday 20 February 2026 03:59:30 +0000 (0:00:00.131) 0:00:12.427 ******* 2026-02-20 03:59:34.071929 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.071940 | orchestrator | 2026-02-20 03:59:34.071951 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-20 03:59:34.071962 | orchestrator | Friday 20 February 2026 03:59:30 +0000 (0:00:00.306) 0:00:12.733 ******* 2026-02-20 03:59:34.071973 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:34.071984 | 
orchestrator | 2026-02-20 03:59:34.071994 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-20 03:59:34.072005 | orchestrator | Friday 20 February 2026 03:59:31 +0000 (0:00:00.127) 0:00:12.860 ******* 2026-02-20 03:59:34.072016 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.072027 | orchestrator | 2026-02-20 03:59:34.072038 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:34.072049 | orchestrator | Friday 20 February 2026 03:59:31 +0000 (0:00:00.137) 0:00:12.998 ******* 2026-02-20 03:59:34.072060 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:34.072071 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:34.072082 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:34.072099 | orchestrator | 2026-02-20 03:59:34.072110 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-20 03:59:34.072120 | orchestrator | Friday 20 February 2026 03:59:31 +0000 (0:00:00.287) 0:00:13.285 ******* 2026-02-20 03:59:34.072131 | orchestrator | changed: [testbed-node-3] 2026-02-20 03:59:34.072142 | orchestrator | changed: [testbed-node-4] 2026-02-20 03:59:34.072153 | orchestrator | changed: [testbed-node-5] 2026-02-20 03:59:43.756213 | orchestrator | 2026-02-20 03:59:43.756353 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-20 03:59:43.756368 | orchestrator | Friday 20 February 2026 03:59:34 +0000 (0:00:02.557) 0:00:15.843 ******* 2026-02-20 03:59:43.756377 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756386 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756394 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756402 | orchestrator | 2026-02-20 03:59:43.756410 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-20 03:59:43.756419 | orchestrator | Friday 
20 February 2026 03:59:34 +0000 (0:00:00.302) 0:00:16.145 ******* 2026-02-20 03:59:43.756444 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756452 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756468 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756476 | orchestrator | 2026-02-20 03:59:43.756485 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-20 03:59:43.756493 | orchestrator | Friday 20 February 2026 03:59:34 +0000 (0:00:00.488) 0:00:16.634 ******* 2026-02-20 03:59:43.756501 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:43.756510 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:43.756518 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:43.756526 | orchestrator | 2026-02-20 03:59:43.756535 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-20 03:59:43.756543 | orchestrator | Friday 20 February 2026 03:59:35 +0000 (0:00:00.307) 0:00:16.941 ******* 2026-02-20 03:59:43.756551 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756559 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756567 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756575 | orchestrator | 2026-02-20 03:59:43.756583 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-20 03:59:43.756595 | orchestrator | Friday 20 February 2026 03:59:35 +0000 (0:00:00.521) 0:00:17.463 ******* 2026-02-20 03:59:43.756603 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:43.756611 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:43.756619 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:43.756627 | orchestrator | 2026-02-20 03:59:43.756636 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-20 03:59:43.756644 | orchestrator | Friday 20 February 2026 03:59:35 +0000 
(0:00:00.289) 0:00:17.752 ******* 2026-02-20 03:59:43.756652 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:43.756660 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:43.756668 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:43.756676 | orchestrator | 2026-02-20 03:59:43.756684 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-20 03:59:43.756692 | orchestrator | Friday 20 February 2026 03:59:36 +0000 (0:00:00.289) 0:00:18.041 ******* 2026-02-20 03:59:43.756700 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756708 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756716 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756724 | orchestrator | 2026-02-20 03:59:43.756732 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-20 03:59:43.756740 | orchestrator | Friday 20 February 2026 03:59:36 +0000 (0:00:00.483) 0:00:18.524 ******* 2026-02-20 03:59:43.756748 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756757 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756766 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756780 | orchestrator | 2026-02-20 03:59:43.756795 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-20 03:59:43.756859 | orchestrator | Friday 20 February 2026 03:59:37 +0000 (0:00:00.715) 0:00:19.240 ******* 2026-02-20 03:59:43.756871 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756880 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756888 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.756897 | orchestrator | 2026-02-20 03:59:43.756906 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-20 03:59:43.756915 | orchestrator | Friday 20 February 2026 03:59:37 +0000 (0:00:00.288) 0:00:19.529 ******* 2026-02-20 
03:59:43.756924 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:43.756933 | orchestrator | skipping: [testbed-node-4] 2026-02-20 03:59:43.756942 | orchestrator | skipping: [testbed-node-5] 2026-02-20 03:59:43.756951 | orchestrator | 2026-02-20 03:59:43.756960 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-20 03:59:43.756969 | orchestrator | Friday 20 February 2026 03:59:38 +0000 (0:00:00.292) 0:00:19.821 ******* 2026-02-20 03:59:43.756978 | orchestrator | ok: [testbed-node-3] 2026-02-20 03:59:43.756988 | orchestrator | ok: [testbed-node-4] 2026-02-20 03:59:43.756997 | orchestrator | ok: [testbed-node-5] 2026-02-20 03:59:43.757006 | orchestrator | 2026-02-20 03:59:43.757015 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-20 03:59:43.757023 | orchestrator | Friday 20 February 2026 03:59:38 +0000 (0:00:00.492) 0:00:20.314 ******* 2026-02-20 03:59:43.757032 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:43.757042 | orchestrator | 2026-02-20 03:59:43.757051 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-20 03:59:43.757060 | orchestrator | Friday 20 February 2026 03:59:38 +0000 (0:00:00.272) 0:00:20.587 ******* 2026-02-20 03:59:43.757069 | orchestrator | skipping: [testbed-node-3] 2026-02-20 03:59:43.757078 | orchestrator | 2026-02-20 03:59:43.757087 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-20 03:59:43.757096 | orchestrator | Friday 20 February 2026 03:59:39 +0000 (0:00:00.285) 0:00:20.872 ******* 2026-02-20 03:59:43.757106 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:43.757115 | orchestrator | 2026-02-20 03:59:43.757124 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-20 
03:59:43.757132 | orchestrator | Friday 20 February 2026 03:59:40 +0000 (0:00:01.652) 0:00:22.525 ******* 2026-02-20 03:59:43.757139 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:43.757148 | orchestrator | 2026-02-20 03:59:43.757156 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-20 03:59:43.757164 | orchestrator | Friday 20 February 2026 03:59:41 +0000 (0:00:00.272) 0:00:22.797 ******* 2026-02-20 03:59:43.757172 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:43.757180 | orchestrator | 2026-02-20 03:59:43.757204 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:43.757213 | orchestrator | Friday 20 February 2026 03:59:41 +0000 (0:00:00.251) 0:00:23.049 ******* 2026-02-20 03:59:43.757221 | orchestrator | 2026-02-20 03:59:43.757229 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:43.757237 | orchestrator | Friday 20 February 2026 03:59:41 +0000 (0:00:00.067) 0:00:23.117 ******* 2026-02-20 03:59:43.757244 | orchestrator | 2026-02-20 03:59:43.757252 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-20 03:59:43.757260 | orchestrator | Friday 20 February 2026 03:59:41 +0000 (0:00:00.068) 0:00:23.185 ******* 2026-02-20 03:59:43.757268 | orchestrator | 2026-02-20 03:59:43.757276 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-20 03:59:43.757283 | orchestrator | Friday 20 February 2026 03:59:41 +0000 (0:00:00.070) 0:00:23.256 ******* 2026-02-20 03:59:43.757292 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-20 03:59:43.757299 | orchestrator | 2026-02-20 03:59:43.757307 | orchestrator | TASK [Print report file information] 
******************************************* 2026-02-20 03:59:43.757321 | orchestrator | Friday 20 February 2026 03:59:42 +0000 (0:00:01.467) 0:00:24.723 ******* 2026-02-20 03:59:43.757329 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-20 03:59:43.757337 | orchestrator |  "msg": [ 2026-02-20 03:59:43.757345 | orchestrator |  "Validator run completed.", 2026-02-20 03:59:43.757353 | orchestrator |  "You can find the report file here:", 2026-02-20 03:59:43.757361 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-20T03:59:19+00:00-report.json", 2026-02-20 03:59:43.757374 | orchestrator |  "on the following host:", 2026-02-20 03:59:43.757383 | orchestrator |  "testbed-manager" 2026-02-20 03:59:43.757391 | orchestrator |  ] 2026-02-20 03:59:43.757399 | orchestrator | } 2026-02-20 03:59:43.757407 | orchestrator | 2026-02-20 03:59:43.757415 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 03:59:43.757424 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 03:59:43.757433 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-20 03:59:43.757441 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-20 03:59:43.757449 | orchestrator | 2026-02-20 03:59:43.757457 | orchestrator | 2026-02-20 03:59:43.757465 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 03:59:43.757473 | orchestrator | Friday 20 February 2026 03:59:43 +0000 (0:00:00.550) 0:00:25.273 ******* 2026-02-20 03:59:43.757481 | orchestrator | =============================================================================== 2026-02-20 03:59:43.757492 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.56s 2026-02-20 
03:59:43.757506 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s 2026-02-20 03:59:43.757518 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s 2026-02-20 03:59:43.757529 | orchestrator | Write report file ------------------------------------------------------- 1.47s 2026-02-20 03:59:43.757540 | orchestrator | Get timestamp for report file ------------------------------------------- 0.84s 2026-02-20 03:59:43.757553 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2026-02-20 03:59:43.757565 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.72s 2026-02-20 03:59:43.757577 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s 2026-02-20 03:59:43.757589 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2026-02-20 03:59:43.757602 | orchestrator | Aggregate test results step one ----------------------------------------- 0.59s 2026-02-20 03:59:43.757615 | orchestrator | Print report file information ------------------------------------------- 0.55s 2026-02-20 03:59:43.757626 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s 2026-02-20 03:59:43.757635 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s 2026-02-20 03:59:43.757642 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.49s 2026-02-20 03:59:43.757650 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s 2026-02-20 03:59:43.757658 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-02-20 03:59:43.757666 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2026-02-20 03:59:43.757674 
| orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.46s 2026-02-20 03:59:43.757682 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.43s 2026-02-20 03:59:43.757690 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.31s 2026-02-20 03:59:44.025594 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-20 03:59:44.031304 | orchestrator | + set -e 2026-02-20 03:59:44.031380 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 03:59:44.031390 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 03:59:44.031397 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 03:59:44.031404 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 03:59:44.032059 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 03:59:44.032152 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 03:59:44.032177 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 03:59:44.032194 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 03:59:44.032212 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 03:59:44.032229 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 03:59:44.032246 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 03:59:44.032263 | orchestrator | ++ export ARA=false 2026-02-20 03:59:44.032280 | orchestrator | ++ ARA=false 2026-02-20 03:59:44.032297 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 03:59:44.032314 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 03:59:44.032329 | orchestrator | ++ export TEMPEST=false 2026-02-20 03:59:44.032347 | orchestrator | ++ TEMPEST=false 2026-02-20 03:59:44.032364 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 03:59:44.032382 | orchestrator | ++ IS_ZUUL=true 2026-02-20 03:59:44.032398 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:59:44.032415 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 03:59:44.032432 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 03:59:44.032448 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 03:59:44.032465 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 03:59:44.032479 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 03:59:44.032493 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 03:59:44.032507 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 03:59:44.032520 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 03:59:44.032533 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 03:59:44.032546 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-20 03:59:44.032559 | orchestrator | + source /etc/os-release 2026-02-20 03:59:44.032572 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-20 03:59:44.032586 | orchestrator | ++ NAME=Ubuntu 2026-02-20 03:59:44.032601 | orchestrator | ++ VERSION_ID=24.04 2026-02-20 03:59:44.032614 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-20 03:59:44.032628 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-20 03:59:44.032642 | orchestrator | ++ ID=ubuntu 2026-02-20 03:59:44.032655 | orchestrator | ++ ID_LIKE=debian 2026-02-20 03:59:44.032670 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-20 03:59:44.032683 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-20 03:59:44.032697 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-20 03:59:44.032712 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-20 03:59:44.032728 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-20 03:59:44.032744 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-20 03:59:44.032758 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-20 03:59:44.032774 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-20 
03:59:44.032790 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-20 03:59:44.052172 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-20 04:00:04.523414 | orchestrator | 2026-02-20 04:00:04.523531 | orchestrator | # Status of Elasticsearch 2026-02-20 04:00:04.523544 | orchestrator | 2026-02-20 04:00:04.523553 | orchestrator | + pushd /opt/configuration/contrib 2026-02-20 04:00:04.523562 | orchestrator | + echo 2026-02-20 04:00:04.523570 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-20 04:00:04.523578 | orchestrator | + echo 2026-02-20 04:00:04.523586 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-20 04:00:04.715745 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-20 04:00:04.715910 | orchestrator | + echo 2026-02-20 04:00:04.715954 | orchestrator | 2026-02-20 04:00:04.715979 | orchestrator | # Status of MariaDB 2026-02-20 04:00:04.716002 | orchestrator | 2026-02-20 04:00:04.716060 | orchestrator | + echo '# Status of MariaDB' 2026-02-20 04:00:04.716083 | orchestrator | + echo 2026-02-20 04:00:04.716445 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-20 04:00:04.779174 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-20 04:00:04.779272 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-20 04:00:04.779287 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-20 04:00:04.779316 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-20 04:00:04.846641 
| orchestrator | Reading package lists... 2026-02-20 04:00:05.173247 | orchestrator | Building dependency tree... 2026-02-20 04:00:05.173675 | orchestrator | Reading state information... 2026-02-20 04:00:05.524952 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-20 04:00:05.525097 | orchestrator | bc set to manually installed. 2026-02-20 04:00:05.525115 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-20 04:00:06.163558 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-20 04:00:06.164033 | orchestrator | 2026-02-20 04:00:06.164081 | orchestrator | # Status of Prometheus 2026-02-20 04:00:06.164099 | orchestrator | 2026-02-20 04:00:06.164110 | orchestrator | + echo 2026-02-20 04:00:06.164121 | orchestrator | + echo '# Status of Prometheus' 2026-02-20 04:00:06.164131 | orchestrator | + echo 2026-02-20 04:00:06.164142 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-20 04:00:06.212628 | orchestrator | Unauthorized 2026-02-20 04:00:06.215182 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-20 04:00:06.276516 | orchestrator | Unauthorized 2026-02-20 04:00:06.279538 | orchestrator | 2026-02-20 04:00:06.279614 | orchestrator | # Status of RabbitMQ 2026-02-20 04:00:06.279635 | orchestrator | 2026-02-20 04:00:06.279654 | orchestrator | + echo 2026-02-20 04:00:06.279673 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-20 04:00:06.279689 | orchestrator | + echo 2026-02-20 04:00:06.280005 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-20 04:00:06.324108 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-20 04:00:06.324220 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-20 04:00:06.324246 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-20 04:00:06.758804 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-20 04:00:06.767044 | orchestrator | 2026-02-20 04:00:06.767131 | orchestrator | # Status of Redis 2026-02-20 04:00:06.767143 | orchestrator | 2026-02-20 04:00:06.767153 | orchestrator | + echo 2026-02-20 04:00:06.767163 | orchestrator | + echo '# Status of Redis' 2026-02-20 04:00:06.767173 | orchestrator | + echo 2026-02-20 04:00:06.767184 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-20 04:00:06.773390 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.003245s;;;0.000000;10.000000 2026-02-20 04:00:06.773496 | orchestrator | 2026-02-20 04:00:06.773513 | orchestrator | # Create backup of MariaDB database 2026-02-20 04:00:06.773526 | orchestrator | + popd 2026-02-20 04:00:06.773538 | orchestrator | + echo 2026-02-20 04:00:06.773551 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-20 04:00:06.773562 | orchestrator | + echo 2026-02-20 04:00:06.773573 | orchestrator | 2026-02-20 04:00:06.773586 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-20 04:00:08.740162 | orchestrator | 2026-02-20 04:00:08 | INFO  | Task c5a70eae-ca79-4f64-9c41-18cf211697c1 (mariadb_backup) was prepared for execution. 2026-02-20 04:00:08.740271 | orchestrator | 2026-02-20 04:00:08 | INFO  | It takes a moment until task c5a70eae-ca79-4f64-9c41-18cf211697c1 (mariadb_backup) has been started and output is visible here. 
2026-02-20 04:03:23.803872 | orchestrator | 2026-02-20 04:03:23.804010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:03:23.804034 | orchestrator | 2026-02-20 04:03:23.804052 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:03:23.804070 | orchestrator | Friday 20 February 2026 04:00:12 +0000 (0:00:00.169) 0:00:00.169 ******* 2026-02-20 04:03:23.804087 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:03:23.804104 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:03:23.804119 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:03:23.804135 | orchestrator | 2026-02-20 04:03:23.804183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 04:03:23.804203 | orchestrator | Friday 20 February 2026 04:00:13 +0000 (0:00:00.326) 0:00:00.495 ******* 2026-02-20 04:03:23.804220 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-20 04:03:23.804238 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-20 04:03:23.804256 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-20 04:03:23.804273 | orchestrator | 2026-02-20 04:03:23.804290 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-20 04:03:23.804308 | orchestrator | 2026-02-20 04:03:23.804326 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-20 04:03:23.804344 | orchestrator | Friday 20 February 2026 04:00:13 +0000 (0:00:00.548) 0:00:01.043 ******* 2026-02-20 04:03:23.804364 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 04:03:23.804384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-20 04:03:23.804404 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-20 04:03:23.804423 | orchestrator | 
2026-02-20 04:03:23.804443 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 04:03:23.804464 | orchestrator | Friday 20 February 2026 04:00:14 +0000 (0:00:00.373) 0:00:01.417 ******* 2026-02-20 04:03:23.804485 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:03:23.804507 | orchestrator | 2026-02-20 04:03:23.804528 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-20 04:03:23.804566 | orchestrator | Friday 20 February 2026 04:00:14 +0000 (0:00:00.528) 0:00:01.946 ******* 2026-02-20 04:03:23.804589 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:03:23.804609 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:03:23.804629 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:03:23.804647 | orchestrator | 2026-02-20 04:03:23.804668 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-20 04:03:23.804688 | orchestrator | Friday 20 February 2026 04:00:17 +0000 (0:00:03.172) 0:00:05.119 ******* 2026-02-20 04:03:23.804738 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:03:23.804760 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:03:23.804778 | orchestrator | 2026-02-20 04:03:23.804796 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-20 04:03:23.804814 | orchestrator | 2026-02-20 04:03:23.804832 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-20 04:03:23.804848 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-20 04:03:23.804865 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-20 04:03:23.804881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 
2026-02-20 04:03:23.804896 | orchestrator | mariadb_bootstrap_restart 2026-02-20 04:03:23.804912 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:03:23.804928 | orchestrator | 2026-02-20 04:03:23.804943 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-20 04:03:23.804960 | orchestrator | skipping: no hosts matched 2026-02-20 04:03:23.804976 | orchestrator | 2026-02-20 04:03:23.804992 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-20 04:03:23.805008 | orchestrator | skipping: no hosts matched 2026-02-20 04:03:23.805024 | orchestrator | 2026-02-20 04:03:23.805040 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-20 04:03:23.805055 | orchestrator | skipping: no hosts matched 2026-02-20 04:03:23.805070 | orchestrator | 2026-02-20 04:03:23.805086 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-20 04:03:23.805102 | orchestrator | 2026-02-20 04:03:23.805119 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-20 04:03:23.805151 | orchestrator | Friday 20 February 2026 04:03:22 +0000 (0:03:05.114) 0:03:10.233 ******* 2026-02-20 04:03:23.805187 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:03:23.805205 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:03:23.805222 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:03:23.805236 | orchestrator | 2026-02-20 04:03:23.805251 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-20 04:03:23.805265 | orchestrator | Friday 20 February 2026 04:03:23 +0000 (0:00:00.295) 0:03:10.528 ******* 2026-02-20 04:03:23.805281 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:03:23.805298 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:03:23.805314 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 04:03:23.805330 | orchestrator | 2026-02-20 04:03:23.805347 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:03:23.805366 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 04:03:23.805384 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 04:03:23.805395 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-20 04:03:23.805405 | orchestrator | 2026-02-20 04:03:23.805415 | orchestrator | 2026-02-20 04:03:23.805424 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:03:23.805434 | orchestrator | Friday 20 February 2026 04:03:23 +0000 (0:00:00.358) 0:03:10.886 ******* 2026-02-20 04:03:23.805467 | orchestrator | =============================================================================== 2026-02-20 04:03:23.805477 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 185.11s 2026-02-20 04:03:23.805487 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.17s 2026-02-20 04:03:23.805496 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2026-02-20 04:03:23.805506 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2026-02-20 04:03:23.805515 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2026-02-20 04:03:23.805525 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.36s 2026-02-20 04:03:23.805535 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-02-20 04:03:23.805544 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.30s 2026-02-20 04:03:24.081792 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-20 04:03:24.087946 | orchestrator | + set -e 2026-02-20 04:03:24.088199 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 04:03:24.088235 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 04:03:24.088249 | orchestrator | ++ INTERACTIVE=false 2026-02-20 04:03:24.088260 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 04:03:24.088271 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 04:03:24.088282 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-20 04:03:24.089864 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-20 04:03:24.096040 | orchestrator | 2026-02-20 04:03:24.096100 | orchestrator | # OpenStack endpoints 2026-02-20 04:03:24.096113 | orchestrator | 2026-02-20 04:03:24.096126 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 04:03:24.096193 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 04:03:24.096211 | orchestrator | + export OS_CLOUD=admin 2026-02-20 04:03:24.096230 | orchestrator | + OS_CLOUD=admin 2026-02-20 04:03:24.096248 | orchestrator | + echo 2026-02-20 04:03:24.096265 | orchestrator | + echo '# OpenStack endpoints' 2026-02-20 04:03:24.096282 | orchestrator | + echo 2026-02-20 04:03:24.096300 | orchestrator | + openstack endpoint list 2026-02-20 04:03:27.288890 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-20 04:03:27.288990 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-20 04:03:27.289027 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-20 04:03:27.289056 | orchestrator | | 090ff22608004349be8db62c7f3b4888 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-20 04:03:27.289068 | orchestrator | | 143ced863dd0439d9aeedeebd7924913 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-20 04:03:27.289078 | orchestrator | | 17a7c412f52e40ca890a7b389a3b301f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-20 04:03:27.289088 | orchestrator | | 20527dde336c449a9b920fd618464cba | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-20 04:03:27.289098 | orchestrator | | 2d4faca24e354fa6a762b4aa60927a93 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-20 04:03:27.289108 | orchestrator | | 2e729a61eff74205bbc2bee8b6d60d4b | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-20 04:03:27.289118 | orchestrator | | 2faae0054aad4e67bb14df76f3b4849f | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-20 04:03:27.289127 | orchestrator | | 36be545d0a0343daafd440a094a9e845 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-20 04:03:27.289138 | orchestrator | | 3e104e5c831b48c98410a906e184acab | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-20 04:03:27.289147 | orchestrator | | 4dd93865a1024cf2a56b578b81e23253 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-20 04:03:27.289157 | orchestrator | | 53c443ca0f0642bdb1ac3dbf947da779 | RegionOne | octavia | load-balancer | 
True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-20 04:03:27.289167 | orchestrator | | 54acc9169ec048b58ac41038ceaaa6c2 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-20 04:03:27.289177 | orchestrator | | 61b8b29af1f54f638205d90820ad0e87 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-20 04:03:27.289187 | orchestrator | | 64eaee01774b46c89553faf94747933b | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-20 04:03:27.289197 | orchestrator | | 655ad983d98346fb8c080d8c8e0b0310 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-20 04:03:27.289207 | orchestrator | | 6af631b366e54ed29e729aed533d46e1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-20 04:03:27.289217 | orchestrator | | 74c95a4bba274ccf8b417bddf7de4a8a | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-20 04:03:27.289226 | orchestrator | | 856d7c8c498e4723a2838b7b17f60b12 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-20 04:03:27.289236 | orchestrator | | 867a2f4a3b6842d3b4dcccccfc721430 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-20 04:03:27.289252 | orchestrator | | 894750ea27a64ae3b46734f6e7679518 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-20 04:03:27.289280 | orchestrator | | 9868bc9b7b7544349f8481d6485440fe | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-20 04:03:27.289295 | orchestrator | | 9dacf73915e347e1968179411b162c5b | RegionOne | manilav2 | sharev2 | True | public | 
https://api.testbed.osism.xyz:8786/v2 | 2026-02-20 04:03:27.289305 | orchestrator | | bad5adb8d07349a0b85f7b07e57d6f12 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-20 04:03:27.289315 | orchestrator | | be2971b25c9744cc9845903a6e04bb0c | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-20 04:03:27.289325 | orchestrator | | c21c9cdd32284623976bf63348ab6d84 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-20 04:03:27.289335 | orchestrator | | cce97877c5214d0bbd101a7ac1410ef7 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-20 04:03:27.289345 | orchestrator | | d3a6d3148fc14d739c3b83c8d596f629 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-20 04:03:27.289355 | orchestrator | | d4573c739c034311b0243ce61b31944c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-20 04:03:27.289364 | orchestrator | | df6efdbfa5a84b568571bdb096ec95fc | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-20 04:03:27.289374 | orchestrator | | e932b3d1981f483c8da21b03a121c526 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-20 04:03:27.289384 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-20 04:03:27.493606 | orchestrator | 2026-02-20 04:03:27.493774 | orchestrator | # Cinder 2026-02-20 04:03:27.493805 | orchestrator | 2026-02-20 04:03:27.493827 | orchestrator | + echo 2026-02-20 04:03:27.493842 | orchestrator | + echo '# Cinder' 2026-02-20 04:03:27.493854 | orchestrator | + echo 2026-02-20 04:03:27.493865 | orchestrator | + 
openstack volume service list 2026-02-20 04:03:30.051175 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:30.051297 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-20 04:03:30.051314 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:30.051326 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-20T04:03:22.000000 | 2026-02-20 04:03:30.051337 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-20T04:03:23.000000 | 2026-02-20 04:03:30.051349 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-20T04:03:23.000000 | 2026-02-20 04:03:30.051361 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-20T04:03:22.000000 | 2026-02-20 04:03:30.051381 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-20T04:03:21.000000 | 2026-02-20 04:03:30.051399 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-20T04:03:23.000000 | 2026-02-20 04:03:30.051418 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-20T04:03:29.000000 | 2026-02-20 04:03:30.051467 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-20T04:03:22.000000 | 2026-02-20 04:03:30.051489 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-20T04:03:22.000000 | 2026-02-20 04:03:30.051509 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:30.291511 | orchestrator | 2026-02-20 04:03:30.291611 | orchestrator | # Neutron 2026-02-20 04:03:30.291629 | orchestrator | 2026-02-20 04:03:30.291641 | orchestrator | + 
echo 2026-02-20 04:03:30.291653 | orchestrator | + echo '# Neutron' 2026-02-20 04:03:30.291665 | orchestrator | + echo 2026-02-20 04:03:30.291677 | orchestrator | + openstack network agent list 2026-02-20 04:03:32.843268 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-20 04:03:32.843381 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-02-20 04:03:32.843397 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-20 04:03:32.843408 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843418 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843447 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843457 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843467 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843477 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-02-20 04:03:32.843487 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-20 04:03:32.843496 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-02-20 04:03:32.843506 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | 
:-) | UP | neutron-ovn-metadata-agent | 2026-02-20 04:03:32.843516 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-02-20 04:03:33.078584 | orchestrator | + openstack network service provider list 2026-02-20 04:03:35.546209 | orchestrator | +---------------+------+---------+ 2026-02-20 04:03:35.546300 | orchestrator | | Service Type | Name | Default | 2026-02-20 04:03:35.546310 | orchestrator | +---------------+------+---------+ 2026-02-20 04:03:35.546317 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-02-20 04:03:35.546324 | orchestrator | +---------------+------+---------+ 2026-02-20 04:03:35.772189 | orchestrator | 2026-02-20 04:03:35.772288 | orchestrator | # Nova 2026-02-20 04:03:35.772302 | orchestrator | 2026-02-20 04:03:35.772314 | orchestrator | + echo 2026-02-20 04:03:35.772325 | orchestrator | + echo '# Nova' 2026-02-20 04:03:35.772337 | orchestrator | + echo 2026-02-20 04:03:35.772348 | orchestrator | + openstack compute service list 2026-02-20 04:03:38.331824 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:38.331934 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-02-20 04:03:38.331994 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:38.332013 | orchestrator | | 22b73912-35c8-4013-9604-e9ed09bccc15 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-20T04:03:33.000000 | 2026-02-20 04:03:38.332030 | orchestrator | | 26f5cb55-76dd-499d-9bc5-a203b80e3cae | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-20T04:03:29.000000 | 2026-02-20 04:03:38.332046 | orchestrator | | 5e654628-a3e9-4c31-8cfe-86bdf9f1a775 | nova-scheduler | 
testbed-node-2 | internal | enabled | up | 2026-02-20T04:03:31.000000 | 2026-02-20 04:03:38.332063 | orchestrator | | 1273594c-700d-4d9a-8add-04c9fed088c9 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-20T04:03:32.000000 | 2026-02-20 04:03:38.332079 | orchestrator | | d12e68a9-f56f-43fb-bd44-dbf7d88f0022 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-20T04:03:34.000000 | 2026-02-20 04:03:38.332095 | orchestrator | | 3a499074-1ddd-4327-8a50-edeecd248eaa | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-20T04:03:35.000000 | 2026-02-20 04:03:38.332106 | orchestrator | | f60a60d0-609b-4135-88d1-38b18b6aa654 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-20T04:03:33.000000 | 2026-02-20 04:03:38.332116 | orchestrator | | 7dc70d25-06a7-4194-8f1e-0d0e6cba7a58 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-20T04:03:34.000000 | 2026-02-20 04:03:38.332125 | orchestrator | | b13a738b-6afd-465a-827a-dd92e78f2f08 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-20T04:03:35.000000 | 2026-02-20 04:03:38.332135 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-20 04:03:38.550634 | orchestrator | + openstack hypervisor list 2026-02-20 04:03:41.067155 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-20 04:03:41.067277 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-20 04:03:41.067291 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-20 04:03:41.067300 | orchestrator | | c0b545c5-8742-4a3a-b033-2f58b6fb33a8 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-20 04:03:41.068051 | orchestrator | | ca32944e-2f5a-4549-81c0-e92b3c8c2f2f | testbed-node-3 | QEMU | 
192.168.16.13 | up | 2026-02-20 04:03:41.068123 | orchestrator | | c6c897e0-7340-4d5e-9756-4f4a3429bf9a | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-20 04:03:41.068138 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-20 04:03:41.279853 | orchestrator | 2026-02-20 04:03:41.280064 | orchestrator | # Run OpenStack test play 2026-02-20 04:03:41.280086 | orchestrator | 2026-02-20 04:03:41.280098 | orchestrator | + echo 2026-02-20 04:03:41.280110 | orchestrator | + echo '# Run OpenStack test play' 2026-02-20 04:03:41.280122 | orchestrator | + echo 2026-02-20 04:03:41.280146 | orchestrator | + osism apply --environment openstack test 2026-02-20 04:03:43.167566 | orchestrator | 2026-02-20 04:03:43 | INFO  | Trying to run play test in environment openstack 2026-02-20 04:03:53.310107 | orchestrator | 2026-02-20 04:03:53 | INFO  | Task add25d97-a415-4138-944c-4ff5a10677ee (test) was prepared for execution. 2026-02-20 04:03:53.310251 | orchestrator | 2026-02-20 04:03:53 | INFO  | It takes a moment until task add25d97-a415-4138-944c-4ff5a10677ee (test) has been started and output is visible here. 
2026-02-20 04:06:22.764750 | orchestrator | 2026-02-20 04:06:22.764910 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-20 04:06:22.764932 | orchestrator | 2026-02-20 04:06:22.764945 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-20 04:06:22.764957 | orchestrator | Friday 20 February 2026 04:03:57 +0000 (0:00:00.067) 0:00:00.067 ******* 2026-02-20 04:06:22.764969 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.764981 | orchestrator | 2026-02-20 04:06:22.765018 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-20 04:06:22.765030 | orchestrator | Friday 20 February 2026 04:04:00 +0000 (0:00:03.634) 0:00:03.702 ******* 2026-02-20 04:06:22.765041 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765052 | orchestrator | 2026-02-20 04:06:22.765063 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-20 04:06:22.765074 | orchestrator | Friday 20 February 2026 04:04:05 +0000 (0:00:04.060) 0:00:07.762 ******* 2026-02-20 04:06:22.765085 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765096 | orchestrator | 2026-02-20 04:06:22.765107 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-20 04:06:22.765118 | orchestrator | Friday 20 February 2026 04:04:11 +0000 (0:00:06.381) 0:00:14.144 ******* 2026-02-20 04:06:22.765129 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765139 | orchestrator | 2026-02-20 04:06:22.765150 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-20 04:06:22.765161 | orchestrator | Friday 20 February 2026 04:04:15 +0000 (0:00:03.839) 0:00:17.983 ******* 2026-02-20 04:06:22.765173 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765184 | orchestrator | 2026-02-20 04:06:22.765195 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-20 04:06:22.765206 | orchestrator | Friday 20 February 2026 04:04:19 +0000 (0:00:03.981) 0:00:21.964 ******* 2026-02-20 04:06:22.765217 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-20 04:06:22.765228 | orchestrator | changed: [localhost] => (item=member) 2026-02-20 04:06:22.765240 | orchestrator | changed: [localhost] => (item=creator) 2026-02-20 04:06:22.765251 | orchestrator | 2026-02-20 04:06:22.765262 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-20 04:06:22.765273 | orchestrator | Friday 20 February 2026 04:04:30 +0000 (0:00:10.903) 0:00:32.868 ******* 2026-02-20 04:06:22.765284 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765295 | orchestrator | 2026-02-20 04:06:22.765306 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-20 04:06:22.765317 | orchestrator | Friday 20 February 2026 04:04:33 +0000 (0:00:03.763) 0:00:36.632 ******* 2026-02-20 04:06:22.765327 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765338 | orchestrator | 2026-02-20 04:06:22.765349 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-20 04:06:22.765360 | orchestrator | Friday 20 February 2026 04:04:38 +0000 (0:00:04.448) 0:00:41.081 ******* 2026-02-20 04:06:22.765370 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765381 | orchestrator | 2026-02-20 04:06:22.765393 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-20 04:06:22.765403 | orchestrator | Friday 20 February 2026 04:04:42 +0000 (0:00:04.051) 0:00:45.132 ******* 2026-02-20 04:06:22.765414 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765425 | orchestrator | 2026-02-20 04:06:22.765436 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-02-20 04:06:22.765447 | orchestrator | Friday 20 February 2026 04:04:46 +0000 (0:00:03.741) 0:00:48.873 ******* 2026-02-20 04:06:22.765458 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765469 | orchestrator | 2026-02-20 04:06:22.765480 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-20 04:06:22.765491 | orchestrator | Friday 20 February 2026 04:04:49 +0000 (0:00:03.877) 0:00:52.751 ******* 2026-02-20 04:06:22.765502 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765512 | orchestrator | 2026-02-20 04:06:22.765524 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-20 04:06:22.765535 | orchestrator | Friday 20 February 2026 04:04:53 +0000 (0:00:03.910) 0:00:56.662 ******* 2026-02-20 04:06:22.765546 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765557 | orchestrator | 2026-02-20 04:06:22.765568 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-20 04:06:22.765578 | orchestrator | Friday 20 February 2026 04:04:58 +0000 (0:00:04.694) 0:01:01.357 ******* 2026-02-20 04:06:22.765597 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765608 | orchestrator | 2026-02-20 04:06:22.765618 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-20 04:06:22.765629 | orchestrator | Friday 20 February 2026 04:05:03 +0000 (0:00:05.157) 0:01:06.514 ******* 2026-02-20 04:06:22.765664 | orchestrator | changed: [localhost] 2026-02-20 04:06:22.765675 | orchestrator | 2026-02-20 04:06:22.765686 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-20 04:06:22.765697 | orchestrator | 2026-02-20 04:06:22.765708 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-20 04:06:22.765719 
| orchestrator | Friday 20 February 2026 04:05:14 +0000 (0:00:11.121) 0:01:17.636 ******* 2026-02-20 04:06:22.765730 | orchestrator | ok: [localhost] 2026-02-20 04:06:22.765741 | orchestrator | 2026-02-20 04:06:22.765752 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-20 04:06:22.765763 | orchestrator | Friday 20 February 2026 04:05:18 +0000 (0:00:03.422) 0:01:21.058 ******* 2026-02-20 04:06:22.765774 | orchestrator | skipping: [localhost] 2026-02-20 04:06:22.765785 | orchestrator | 2026-02-20 04:06:22.765796 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-20 04:06:22.765808 | orchestrator | Friday 20 February 2026 04:05:18 +0000 (0:00:00.039) 0:01:21.098 ******* 2026-02-20 04:06:22.765836 | orchestrator | skipping: [localhost] 2026-02-20 04:06:22.765847 | orchestrator | 2026-02-20 04:06:22.765858 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-20 04:06:22.765869 | orchestrator | Friday 20 February 2026 04:05:18 +0000 (0:00:00.037) 0:01:21.135 ******* 2026-02-20 04:06:22.765881 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-20 04:06:22.765892 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-20 04:06:22.765923 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-20 04:06:22.765935 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-20 04:06:22.765946 | orchestrator | skipping: [localhost] => (item=test)  2026-02-20 04:06:22.765957 | orchestrator | skipping: [localhost] 2026-02-20 04:06:22.765968 | orchestrator | 2026-02-20 04:06:22.765979 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-20 04:06:22.765990 | orchestrator | Friday 20 February 2026 04:05:18 +0000 (0:00:00.143) 0:01:21.279 ******* 2026-02-20 04:06:22.766001 | orchestrator | skipping: [localhost] 2026-02-20 
04:06:22.766065 | orchestrator | 2026-02-20 04:06:22.766081 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-20 04:06:22.766092 | orchestrator | Friday 20 February 2026 04:05:18 +0000 (0:00:00.152) 0:01:21.431 ******* 2026-02-20 04:06:22.766103 | orchestrator | changed: [localhost] => (item=test) 2026-02-20 04:06:22.766114 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-20 04:06:22.766125 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-20 04:06:22.766136 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-20 04:06:22.766147 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-20 04:06:22.766157 | orchestrator | 2026-02-20 04:06:22.766169 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-20 04:06:22.766180 | orchestrator | Friday 20 February 2026 04:05:23 +0000 (0:00:04.503) 0:01:25.935 ******* 2026-02-20 04:06:22.766191 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-20 04:06:22.766203 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-02-20 04:06:22.766214 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-20 04:06:22.766225 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-02-20 04:06:22.766238 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j197350288545.3697', 'results_file': '/ansible/.ansible_async/j197350288545.3697', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766260 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j105544746295.3722', 'results_file': '/ansible/.ansible_async/j105544746295.3722', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766272 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j675448476882.3747', 'results_file': '/ansible/.ansible_async/j675448476882.3747', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766283 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j268787772153.3772', 'results_file': '/ansible/.ansible_async/j268787772153.3772', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766294 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j107245310692.3804', 'results_file': '/ansible/.ansible_async/j107245310692.3804', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766305 | orchestrator | 2026-02-20 04:06:22.766317 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-20 04:06:22.766328 | orchestrator | Friday 20 February 2026 04:06:09 +0000 (0:00:46.227) 0:02:12.163 ******* 2026-02-20 04:06:22.766339 | orchestrator | changed: [localhost] => (item=test) 2026-02-20 04:06:22.766350 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-20 04:06:22.766361 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-20 04:06:22.766372 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-02-20 04:06:22.766383 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-20 04:06:22.766393 | orchestrator | 2026-02-20 04:06:22.766404 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-20 04:06:22.766415 | orchestrator | Friday 20 February 2026 04:06:13 +0000 (0:00:04.235) 0:02:16.398 ******* 2026-02-20 04:06:22.766427 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-20 04:06:22.766438 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j638214795574.3901', 'results_file': '/ansible/.ansible_async/j638214795574.3901', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766450 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j1644478889.3926', 'results_file': '/ansible/.ansible_async/j1644478889.3926', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766462 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j146447454438.3951', 'results_file': '/ansible/.ansible_async/j146447454438.3951', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-20 04:06:22.766490 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j597328158199.3976', 'results_file': '/ansible/.ansible_async/j597328158199.3976', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048402 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j787271754950.4001', 'results_file': '/ansible/.ansible_async/j787271754950.4001', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048528 | orchestrator | 2026-02-20 
04:07:00.048547 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-20 04:07:00.048561 | orchestrator | Friday 20 February 2026 04:06:22 +0000 (0:00:09.113) 0:02:25.512 ******* 2026-02-20 04:07:00.048574 | orchestrator | changed: [localhost] => (item=test) 2026-02-20 04:07:00.048586 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-20 04:07:00.048597 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-20 04:07:00.048609 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-20 04:07:00.048698 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-20 04:07:00.048712 | orchestrator | 2026-02-20 04:07:00.048725 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-20 04:07:00.048736 | orchestrator | Friday 20 February 2026 04:06:26 +0000 (0:00:04.029) 0:02:29.541 ******* 2026-02-20 04:07:00.048747 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-20 04:07:00.048760 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j775541516390.4077', 'results_file': '/ansible/.ansible_async/j775541516390.4077', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048772 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j745858082814.4102', 'results_file': '/ansible/.ansible_async/j745858082814.4102', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048796 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j804073185143.4128', 'results_file': '/ansible/.ansible_async/j804073185143.4128', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048808 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j533399770877.4154', 'results_file': '/ansible/.ansible_async/j533399770877.4154', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048831 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j747751351914.4180', 'results_file': '/ansible/.ansible_async/j747751351914.4180', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-20 04:07:00.048842 | orchestrator | 2026-02-20 04:07:00.048853 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-20 04:07:00.048864 | orchestrator | Friday 20 February 2026 04:06:35 +0000 (0:00:08.777) 0:02:38.318 ******* 2026-02-20 04:07:00.048876 | orchestrator | changed: [localhost] 2026-02-20 04:07:00.048887 | orchestrator | 2026-02-20 04:07:00.048898 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-20 04:07:00.048909 | orchestrator | Friday 20 February 
2026 04:06:41 +0000 (0:00:05.806) 0:02:44.125 ******* 2026-02-20 04:07:00.048920 | orchestrator | changed: [localhost] 2026-02-20 04:07:00.048931 | orchestrator | 2026-02-20 04:07:00.048945 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-20 04:07:00.048959 | orchestrator | Friday 20 February 2026 04:06:54 +0000 (0:00:13.560) 0:02:57.685 ******* 2026-02-20 04:07:00.048973 | orchestrator | ok: [localhost] 2026-02-20 04:07:00.048987 | orchestrator | 2026-02-20 04:07:00.049001 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-20 04:07:00.049014 | orchestrator | Friday 20 February 2026 04:06:59 +0000 (0:00:04.850) 0:03:02.535 ******* 2026-02-20 04:07:00.049027 | orchestrator | ok: [localhost] => { 2026-02-20 04:07:00.049041 | orchestrator |  "msg": "192.168.112.195" 2026-02-20 04:07:00.049055 | orchestrator | } 2026-02-20 04:07:00.049070 | orchestrator | 2026-02-20 04:07:00.049083 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:07:00.049098 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 04:07:00.049112 | orchestrator | 2026-02-20 04:07:00.049126 | orchestrator | 2026-02-20 04:07:00.049139 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:07:00.049153 | orchestrator | Friday 20 February 2026 04:06:59 +0000 (0:00:00.048) 0:03:02.583 ******* 2026-02-20 04:07:00.049167 | orchestrator | =============================================================================== 2026-02-20 04:07:00.049181 | orchestrator | Wait for instance creation to complete --------------------------------- 46.23s 2026-02-20 04:07:00.049193 | orchestrator | Attach test volume ----------------------------------------------------- 13.56s 2026-02-20 04:07:00.049226 | orchestrator | Create test router 
----------------------------------------------------- 11.12s 2026-02-20 04:07:00.049237 | orchestrator | Add member roles to user test ------------------------------------------ 10.90s 2026-02-20 04:07:00.049248 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.11s 2026-02-20 04:07:00.049260 | orchestrator | Wait for tags to be added ----------------------------------------------- 8.78s 2026-02-20 04:07:00.049271 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.38s 2026-02-20 04:07:00.049300 | orchestrator | Create test volume ------------------------------------------------------ 5.81s 2026-02-20 04:07:00.049312 | orchestrator | Create test subnet ------------------------------------------------------ 5.16s 2026-02-20 04:07:00.049323 | orchestrator | Create floating ip address ---------------------------------------------- 4.85s 2026-02-20 04:07:00.049334 | orchestrator | Create test network ----------------------------------------------------- 4.69s 2026-02-20 04:07:00.049345 | orchestrator | Create test instances --------------------------------------------------- 4.50s 2026-02-20 04:07:00.049356 | orchestrator | Create ssh security group ----------------------------------------------- 4.45s 2026-02-20 04:07:00.049367 | orchestrator | Add metadata to instances ----------------------------------------------- 4.24s 2026-02-20 04:07:00.049378 | orchestrator | Create test-admin user -------------------------------------------------- 4.06s 2026-02-20 04:07:00.049389 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.05s 2026-02-20 04:07:00.049400 | orchestrator | Add tag to instances ---------------------------------------------------- 4.03s 2026-02-20 04:07:00.049412 | orchestrator | Create test user -------------------------------------------------------- 3.98s 2026-02-20 04:07:00.049423 | orchestrator | Create test keypair 
----------------------------------------------------- 3.91s 2026-02-20 04:07:00.049434 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.88s 2026-02-20 04:07:00.354401 | orchestrator | + server_list 2026-02-20 04:07:00.354520 | orchestrator | + openstack --os-cloud test server list 2026-02-20 04:07:03.974901 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-20 04:07:03.975002 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-20 04:07:03.975018 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-20 04:07:03.975030 | orchestrator | | 5e6d41ad-2b64-49c6-a5ea-db5d59894a82 | test-4 | ACTIVE | test=192.168.112.169, 192.168.200.19 | N/A (booted from volume) | SCS-1L-1 | 2026-02-20 04:07:03.975041 | orchestrator | | 12a7c7a5-5751-40e9-82b8-8366c352273e | test-3 | ACTIVE | test=192.168.112.143, 192.168.200.38 | N/A (booted from volume) | SCS-1L-1 | 2026-02-20 04:07:03.975052 | orchestrator | | 833f44e9-30cd-427b-962e-4549f9d7be9b | test | ACTIVE | test=192.168.112.195, 192.168.200.164 | N/A (booted from volume) | SCS-1L-1 | 2026-02-20 04:07:03.975063 | orchestrator | | 9fa5ce14-766a-4bac-b762-6fdf6cdc29bc | test-2 | ACTIVE | test=192.168.112.115, 192.168.200.60 | N/A (booted from volume) | SCS-1L-1 | 2026-02-20 04:07:03.975073 | orchestrator | | a9771123-51fb-412d-ae05-996c2a5c1bd5 | test-1 | ACTIVE | test=192.168.112.191, 192.168.200.34 | N/A (booted from volume) | SCS-1L-1 | 2026-02-20 04:07:03.975084 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-20 04:07:04.211553 | orchestrator | + openstack --os-cloud test server show test 2026-02-20 04:07:07.347173 | orchestrator | 
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:07.347285 | orchestrator | | Field | Value |
2026-02-20 04:07:07.347296 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:07.347308 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-20 04:07:07.347316 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-20 04:07:07.347324 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-20 04:07:07.347331 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-02-20 04:07:07.347339 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-20 04:07:07.347346 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-20 04:07:07.347367 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-20 04:07:07.347376 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-20 04:07:07.347388 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-20 04:07:07.347396 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-20 04:07:07.347407 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-20 04:07:07.347414 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-20 04:07:07.347422 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-20 04:07:07.347429 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-20 04:07:07.347437 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-20 04:07:07.347445 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-20T04:05:54.000000 |
2026-02-20 04:07:07.347483 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-20 04:07:07.347505 | orchestrator | | accessIPv4 | |
2026-02-20 04:07:07.347513 | orchestrator | | accessIPv6 | |
2026-02-20 04:07:07.347521 | orchestrator | | addresses | test=192.168.112.195, 192.168.200.164 |
2026-02-20 04:07:07.347532 | orchestrator | | config_drive | |
2026-02-20 04:07:07.347540 | orchestrator | | created | 2026-02-20T04:05:28Z |
2026-02-20 04:07:07.347547 | orchestrator | | description | None |
2026-02-20 04:07:07.347555 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-20 04:07:07.347563 | orchestrator | | hostId | fe991e687391f895d597e41e5484b5dec78d4087e632f864e0949719 |
2026-02-20 04:07:07.347570 | orchestrator | | host_status | None |
2026-02-20 04:07:07.347588 | orchestrator | | id | 833f44e9-30cd-427b-962e-4549f9d7be9b |
2026-02-20 04:07:07.347595 | orchestrator | | image | N/A (booted from volume) |
2026-02-20 04:07:07.347603 | orchestrator | | key_name | test |
2026-02-20 04:07:07.347611 | orchestrator | | locked | False |
2026-02-20 04:07:07.347637 | orchestrator | | locked_reason | None |
2026-02-20 04:07:07.347645 | orchestrator | | name | test |
2026-02-20 04:07:07.347653 | orchestrator | | pinned_availability_zone | None |
2026-02-20 04:07:07.347660 | orchestrator | | progress | 0 |
2026-02-20 04:07:07.347668 | orchestrator | | project_id | 18057c29a2fe4d8b95eb23f13a7497e3 |
2026-02-20 04:07:07.347680 | orchestrator | | properties | hostname='test' |
2026-02-20 04:07:07.347701 | orchestrator | | security_groups | name='icmp' |
2026-02-20 04:07:07.347711 | orchestrator | | | name='ssh' |
2026-02-20 04:07:07.347721 | orchestrator | | server_groups | None |
2026-02-20 04:07:07.347733 | orchestrator | | status | ACTIVE |
2026-02-20 04:07:07.347742 | orchestrator | | tags | test |
2026-02-20 04:07:07.347756 | orchestrator | | trusted_image_certificates | None |
2026-02-20 04:07:07.347768 | orchestrator | | updated | 2026-02-20T04:06:15Z |
2026-02-20 04:07:07.347780 | orchestrator | | user_id | 45f84249ecc94c60ae3a5284d541798b |
2026-02-20 04:07:07.347793 | orchestrator | | volumes_attached | delete_on_termination='True', id='0f34d400-d659-4304-8b36-fb3bd42ed51d' |
2026-02-20 04:07:07.347815 | orchestrator | | | delete_on_termination='False', id='17f8281a-addd-499b-bfec-53ef46995858' |
2026-02-20 04:07:07.350196 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:07.566400 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-20 04:07:10.549420 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:10.549545 | orchestrator | | Field | Value |
2026-02-20 04:07:10.549567 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:10.549578 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-20 04:07:10.549587 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-20 04:07:10.549597 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-20 04:07:10.549606 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-20 04:07:10.549663 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-20 04:07:10.549674 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-20 04:07:10.549699 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-20 04:07:10.549709 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-20 04:07:10.549718 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-20 04:07:10.549731 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-20 04:07:10.549741 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-20 04:07:10.549750 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-20 04:07:10.549759 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-20 04:07:10.549774 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-20 04:07:10.549783 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-20 04:07:10.549792 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-20T04:05:54.000000 |
2026-02-20 04:07:10.549808 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-20 04:07:10.549818 | orchestrator | | accessIPv4 | |
2026-02-20 04:07:10.549827 | orchestrator | | accessIPv6 | |
2026-02-20 04:07:10.549839 | orchestrator | | addresses | test=192.168.112.191, 192.168.200.34 |
2026-02-20 04:07:10.549849 | orchestrator | | config_drive | |
2026-02-20 04:07:10.549858 | orchestrator | | created | 2026-02-20T04:05:28Z |
2026-02-20 04:07:10.549872 | orchestrator | | description | None |
2026-02-20 04:07:10.549881 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-20 04:07:10.549890 | orchestrator | | hostId | eeee11027ba11ec327c8f48a96485149f3ad18202141ff38fa6f63fc |
2026-02-20 04:07:10.549899 | orchestrator | | host_status | None |
2026-02-20 04:07:10.549914 | orchestrator | | id | a9771123-51fb-412d-ae05-996c2a5c1bd5 |
2026-02-20 04:07:10.549925 | orchestrator | | image | N/A (booted from volume) |
2026-02-20 04:07:10.549938 | orchestrator | | key_name | test |
2026-02-20 04:07:10.549959 | orchestrator | | locked | False |
2026-02-20 04:07:10.549975 | orchestrator | | locked_reason | None |
2026-02-20 04:07:10.549998 | orchestrator | | name | test-1 |
2026-02-20 04:07:10.550014 | orchestrator | | pinned_availability_zone | None |
2026-02-20 04:07:10.550112 | orchestrator | | progress | 0 |
2026-02-20 04:07:10.550123 | orchestrator | | project_id | 18057c29a2fe4d8b95eb23f13a7497e3 |
2026-02-20 04:07:10.550134 | orchestrator | | properties | hostname='test-1' |
2026-02-20 04:07:10.550153 | orchestrator | | security_groups | name='icmp' |
2026-02-20 04:07:10.550193 | orchestrator | | | name='ssh' |
2026-02-20 04:07:10.550203 | orchestrator | | server_groups | None |
2026-02-20 04:07:10.550212 | orchestrator | | status | ACTIVE |
2026-02-20 04:07:10.550222 | orchestrator | | tags | test |
2026-02-20 04:07:10.550239 | orchestrator | | trusted_image_certificates | None |
2026-02-20 04:07:10.550248 | orchestrator | | updated | 2026-02-20T04:06:15Z |
2026-02-20 04:07:10.550257 | orchestrator | | user_id | 45f84249ecc94c60ae3a5284d541798b |
2026-02-20 04:07:10.550266 | orchestrator | | volumes_attached | delete_on_termination='True', id='39810a66-9908-4875-a6f1-23056d28f7d4' |
2026-02-20 04:07:10.552583 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:10.786400 | orchestrator | + openstack --os-cloud test server show test-2
2026-02-20 04:07:13.724543 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:13.724712 | orchestrator | | Field | Value |
2026-02-20 04:07:13.724773 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:13.724797 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-20 04:07:13.724855 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-20 04:07:13.724877 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-20 04:07:13.724896 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-02-20 04:07:13.724915 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-20 04:07:13.724933 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-20 04:07:13.724976 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-20 04:07:13.724995 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-20 04:07:13.725015 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-20 04:07:13.725044 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-20 04:07:13.725079 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-20 04:07:13.725100 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-20 04:07:13.725119 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-20 04:07:13.725137 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-20 04:07:13.725148 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-20 04:07:13.725160 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-20T04:05:56.000000 |
2026-02-20 04:07:13.725180 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-20 04:07:13.725192 | orchestrator | | accessIPv4 | |
2026-02-20 04:07:13.725204 | orchestrator | | accessIPv6 | |
2026-02-20 04:07:13.725228 | orchestrator | | addresses | test=192.168.112.115, 192.168.200.60 |
2026-02-20 04:07:13.725239 | orchestrator | | config_drive | |
2026-02-20 04:07:13.725251 | orchestrator | | created | 2026-02-20T04:05:28Z |
2026-02-20 04:07:13.725262 | orchestrator | | description | None |
2026-02-20 04:07:13.725274 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-20 04:07:13.725285 | orchestrator | | hostId | fe991e687391f895d597e41e5484b5dec78d4087e632f864e0949719 |
2026-02-20 04:07:13.725296 | orchestrator | | host_status | None |
2026-02-20 04:07:13.725315 | orchestrator | | id | 9fa5ce14-766a-4bac-b762-6fdf6cdc29bc |
2026-02-20 04:07:13.725327 | orchestrator | | image | N/A (booted from volume) |
2026-02-20 04:07:13.725346 | orchestrator | | key_name | test |
2026-02-20 04:07:13.725362 | orchestrator | | locked | False |
2026-02-20 04:07:13.725374 | orchestrator | | locked_reason | None |
2026-02-20 04:07:13.725385 | orchestrator | | name | test-2 |
2026-02-20 04:07:13.725397 | orchestrator | | pinned_availability_zone | None |
2026-02-20 04:07:13.725408 | orchestrator | | progress | 0 |
2026-02-20 04:07:13.725419 | orchestrator | | project_id | 18057c29a2fe4d8b95eb23f13a7497e3 |
2026-02-20 04:07:13.725431 | orchestrator | | properties | hostname='test-2' |
2026-02-20 04:07:13.725449 | orchestrator | | security_groups | name='icmp' |
2026-02-20 04:07:13.725461 | orchestrator | | | name='ssh' |
2026-02-20 04:07:13.725478 | orchestrator | | server_groups | None |
2026-02-20 04:07:13.725494 | orchestrator | | status | ACTIVE |
2026-02-20 04:07:13.725506 | orchestrator | | tags | test |
2026-02-20 04:07:13.725517 | orchestrator | | trusted_image_certificates | None |
2026-02-20 04:07:13.725529 | orchestrator | | updated | 2026-02-20T04:06:16Z |
2026-02-20 04:07:13.725540 | orchestrator | | user_id | 45f84249ecc94c60ae3a5284d541798b |
2026-02-20 04:07:13.725551 | orchestrator | | volumes_attached | delete_on_termination='True', id='0158db46-0e9e-4ba9-8951-ac097f3ed905' |
2026-02-20 04:07:13.729831 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:14.036732 | orchestrator | + openstack --os-cloud test server show test-3
2026-02-20 04:07:17.061820 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:17.061950 | orchestrator | | Field | Value |
2026-02-20 04:07:17.061967 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:17.061995 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-20 04:07:17.062007 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-20 04:07:17.062080 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-20 04:07:17.062095 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-02-20 04:07:17.062108 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-20 04:07:17.062120 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-20 04:07:17.062154 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-20 04:07:17.062176 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-20 04:07:17.062189 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-20 04:07:17.062202 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-20 04:07:17.062220 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-20 04:07:17.062232 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-20 04:07:17.062245 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-20 04:07:17.062257 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-20 04:07:17.062270 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-20 04:07:17.062282 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-20T04:05:54.000000 |
2026-02-20 04:07:17.062310 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-20 04:07:17.062322 | orchestrator | | accessIPv4 | |
2026-02-20 04:07:17.062335 | orchestrator | | accessIPv6 | |
2026-02-20 04:07:17.062347 | orchestrator | | addresses | test=192.168.112.143, 192.168.200.38 |
2026-02-20 04:07:17.062836 | orchestrator | | config_drive | |
2026-02-20 04:07:17.062857 | orchestrator | | created | 2026-02-20T04:05:31Z |
2026-02-20 04:07:17.062871 | orchestrator | | description | None |
2026-02-20 04:07:17.062883 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-20 04:07:17.062896 | orchestrator | | hostId | eeee11027ba11ec327c8f48a96485149f3ad18202141ff38fa6f63fc |
2026-02-20 04:07:17.062916 | orchestrator | | host_status | None |
2026-02-20 04:07:17.062938 | orchestrator | | id | 12a7c7a5-5751-40e9-82b8-8366c352273e |
2026-02-20 04:07:17.062955 | orchestrator | | image | N/A (booted from volume) |
2026-02-20 04:07:17.062967 | orchestrator | | key_name | test |
2026-02-20 04:07:17.062979 | orchestrator | | locked | False |
2026-02-20 04:07:17.062991 | orchestrator | | locked_reason | None |
2026-02-20 04:07:17.063002 | orchestrator | | name | test-3 |
2026-02-20 04:07:17.063015 | orchestrator | | pinned_availability_zone | None |
2026-02-20 04:07:17.063026 | orchestrator | | progress | 0 |
2026-02-20 04:07:17.063038 | orchestrator | | project_id | 18057c29a2fe4d8b95eb23f13a7497e3 |
2026-02-20 04:07:17.063056 | orchestrator | | properties | hostname='test-3' |
2026-02-20 04:07:17.063078 | orchestrator | | security_groups | name='icmp' |
2026-02-20 04:07:17.063094 | orchestrator | | | name='ssh' |
2026-02-20 04:07:17.063107 | orchestrator | | server_groups | None |
2026-02-20 04:07:17.063119 | orchestrator | | status | ACTIVE |
2026-02-20 04:07:17.063130 | orchestrator | | tags | test |
2026-02-20 04:07:17.063142 | orchestrator | | trusted_image_certificates | None |
2026-02-20 04:07:17.063153 | orchestrator | | updated | 2026-02-20T04:06:16Z |
2026-02-20 04:07:17.063165 | orchestrator | | user_id | 45f84249ecc94c60ae3a5284d541798b |
2026-02-20 04:07:17.063186 | orchestrator | | volumes_attached | delete_on_termination='True', id='8d777d7c-7e52-405b-808b-71efe36616ae' |
2026-02-20 04:07:17.067687 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:17.291810 | orchestrator | + openstack --os-cloud test server show test-4
2026-02-20 04:07:20.177534 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:20.177705 | orchestrator | | Field | Value |
2026-02-20 04:07:20.177728 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:20.177744 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-20 04:07:20.177759 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-20 04:07:20.177774 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-20 04:07:20.177788 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-02-20 04:07:20.177833 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-20 04:07:20.177851 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-20 04:07:20.177886 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-20 04:07:20.177908 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-20 04:07:20.177922 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-20 04:07:20.177937 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-20 04:07:20.177950 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-20 04:07:20.177964 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-20 04:07:20.177978 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-20 04:07:20.178002 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-20 04:07:20.178070 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-20 04:07:20.178084 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-20T04:05:55.000000 |
2026-02-20 04:07:20.178102 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-20 04:07:20.178118 | orchestrator | | accessIPv4 | |
2026-02-20 04:07:20.178127 | orchestrator | | accessIPv6 | |
2026-02-20 04:07:20.178135 | orchestrator | | addresses | test=192.168.112.169, 192.168.200.19 |
2026-02-20 04:07:20.178143 | orchestrator | | config_drive | |
2026-02-20 04:07:20.178152 | orchestrator | | created | 2026-02-20T04:05:32Z |
2026-02-20 04:07:20.178160 | orchestrator | | description | None |
2026-02-20 04:07:20.178175 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-20 04:07:20.178183 | orchestrator | | hostId | eeee11027ba11ec327c8f48a96485149f3ad18202141ff38fa6f63fc |
2026-02-20 04:07:20.178192 | orchestrator | | host_status | None |
2026-02-20 04:07:20.178207 | orchestrator | | id | 5e6d41ad-2b64-49c6-a5ea-db5d59894a82 |
2026-02-20 04:07:20.178219 | orchestrator | | image | N/A (booted from volume) |
2026-02-20 04:07:20.178228 | orchestrator | | key_name | test |
2026-02-20 04:07:20.178236 | orchestrator | | locked | False |
2026-02-20 04:07:20.178244 | orchestrator | | locked_reason | None |
2026-02-20 04:07:20.178252 | orchestrator | | name | test-4 |
2026-02-20 04:07:20.178269 | orchestrator | | pinned_availability_zone | None |
2026-02-20 04:07:20.178283 | orchestrator | | progress | 0 |
2026-02-20 04:07:20.178297 | orchestrator | | project_id | 18057c29a2fe4d8b95eb23f13a7497e3 |
2026-02-20 04:07:20.178311 | orchestrator | | properties | hostname='test-4' |
2026-02-20 04:07:20.178332 | orchestrator | | security_groups | name='icmp' |
2026-02-20 04:07:20.178352 | orchestrator | | | name='ssh' |
2026-02-20 04:07:20.178365 | orchestrator | | server_groups | None |
2026-02-20 04:07:20.178379 | orchestrator | | status | ACTIVE |
2026-02-20 04:07:20.178392 | orchestrator | | tags | test |
2026-02-20 04:07:20.178412 | orchestrator | | trusted_image_certificates | None |
2026-02-20 04:07:20.178425 | orchestrator | | updated | 2026-02-20T04:06:17Z |
2026-02-20 04:07:20.178437 | orchestrator | | user_id | 45f84249ecc94c60ae3a5284d541798b |
2026-02-20 04:07:20.178451 | orchestrator | | volumes_attached | delete_on_termination='True', id='3aa8d08a-4315-4ca7-b08e-8729d0d95aa7' |
2026-02-20 04:07:20.181822 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-20 04:07:20.421083 | orchestrator | + server_ping
2026-02-20 04:07:20.422783 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-20 04:07:20.422845 | orchestrator | ++ tr -d '\r'
2026-02-20 04:07:23.179114 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-20 04:07:23.179212 | orchestrator | + ping -c3 192.168.112.169
2026-02-20 04:07:23.189693 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2026-02-20 04:07:23.189771 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=5.95 ms
2026-02-20 04:07:24.188038 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=3.22 ms
2026-02-20 04:07:25.188959 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=1.61 ms
2026-02-20 04:07:25.189056 | orchestrator |
2026-02-20 04:07:25.189069 | orchestrator | --- 192.168.112.169 ping statistics ---
2026-02-20 04:07:25.189081 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-20 04:07:25.189091 | orchestrator | rtt min/avg/max/mdev = 1.605/3.592/5.950/1.793 ms
2026-02-20 04:07:25.189933 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-20 04:07:25.189956 | orchestrator | + ping -c3 192.168.112.115
2026-02-20 04:07:25.201214 | orchestrator | PING 192.168.112.115 (192.168.112.115) 56(84) bytes of data.
2026-02-20 04:07:25.201298 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=1 ttl=63 time=6.44 ms
2026-02-20 04:07:26.199838 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=2 ttl=63 time=3.09 ms
2026-02-20 04:07:27.200181 | orchestrator | 64 bytes from 192.168.112.115: icmp_seq=3 ttl=63 time=1.99 ms
2026-02-20 04:07:27.200305 | orchestrator |
2026-02-20 04:07:27.200558 | orchestrator | --- 192.168.112.115 ping statistics ---
2026-02-20 04:07:27.200580 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-20 04:07:27.200591 | orchestrator | rtt min/avg/max/mdev = 1.989/3.839/6.441/1.893 ms
2026-02-20 04:07:27.200688 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-20 04:07:27.200702 | orchestrator | + ping -c3 192.168.112.191
2026-02-20 04:07:27.214270 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-02-20 04:07:27.214369 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=8.83 ms
2026-02-20 04:07:28.210108 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.42 ms
2026-02-20 04:07:29.211837 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.87 ms
2026-02-20 04:07:29.211951 | orchestrator |
2026-02-20 04:07:29.211969 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-02-20 04:07:29.211983 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-20 04:07:29.212077 | orchestrator | rtt min/avg/max/mdev = 1.867/4.369/8.826/3.159 ms
2026-02-20 04:07:29.212097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-20 04:07:29.212110 | orchestrator | + ping -c3 192.168.112.143
2026-02-20 04:07:29.223911 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data.
2026-02-20 04:07:29.223985 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=7.97 ms
2026-02-20 04:07:30.220035 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.68 ms
2026-02-20 04:07:31.221367 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=2.13 ms
2026-02-20 04:07:31.221508 | orchestrator |
2026-02-20 04:07:31.221535 | orchestrator | --- 192.168.112.143 ping statistics ---
2026-02-20 04:07:31.221554 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-20 04:07:31.221840 | orchestrator | rtt min/avg/max/mdev = 2.134/4.258/7.965/2.630 ms
2026-02-20 04:07:31.222421 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-20 04:07:31.222442 | orchestrator | + ping -c3 192.168.112.195
2026-02-20 04:07:31.234319 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2026-02-20 04:07:31.234420 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=7.73 ms
2026-02-20 04:07:32.231279 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.60 ms
2026-02-20 04:07:33.232812 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=1.70 ms
2026-02-20 04:07:33.232910 | orchestrator |
2026-02-20 04:07:33.232927 | orchestrator | --- 192.168.112.195 ping statistics ---
2026-02-20 04:07:33.232940 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-20 04:07:33.232952 | orchestrator | rtt min/avg/max/mdev = 1.695/4.006/7.728/2.657 ms
2026-02-20 04:07:33.233957 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-20 04:07:33.677965 | orchestrator | ok: Runtime: 0:10:18.745930
2026-02-20 04:07:33.720015 |
2026-02-20 04:07:33.720148 | TASK [Run tempest]
2026-02-20 04:07:34.255985 | orchestrator | skipping: Conditional result was False
2026-02-20 04:07:34.274410 |
2026-02-20 04:07:34.274601 | TASK [Check prometheus alert status]
2026-02-20 04:07:34.816347 | orchestrator | skipping: Conditional result was False
2026-02-20 04:07:34.830973 |
2026-02-20 04:07:34.831132 | PLAY [Upgrade testbed]
2026-02-20 04:07:34.842392 |
2026-02-20 04:07:34.842517 | TASK [Print next ceph version]
2026-02-20 04:07:34.922090 | orchestrator | ok
2026-02-20 04:07:34.931806 |
2026-02-20 04:07:34.931928 | TASK [Print next openstack version]
2026-02-20 04:07:35.011100 | orchestrator | ok
2026-02-20 04:07:35.022595 |
2026-02-20 04:07:35.022722 | TASK [Print next manager version]
2026-02-20 04:07:35.101159 | orchestrator | ok
2026-02-20 04:07:35.111871 |
2026-02-20 04:07:35.112017 | TASK [Set cloud fact (Zuul deployment)]
2026-02-20 04:07:35.163709 | orchestrator | ok
2026-02-20 04:07:35.175593 |
2026-02-20 04:07:35.175725 | TASK [Set cloud fact (local deployment)]
2026-02-20 04:07:35.211277 | orchestrator | skipping: Conditional result was False
2026-02-20 04:07:35.228598 |
2026-02-20 04:07:35.228796 | TASK [Fetch manager address]
2026-02-20 04:07:35.517032 | orchestrator | ok
2026-02-20 04:07:35.525887 |
2026-02-20 04:07:35.526015 | TASK [Set manager_host address]
2026-02-20 04:07:35.602618 | orchestrator | ok
2026-02-20 04:07:35.610513 |
2026-02-20 04:07:35.610637 | TASK [Run upgrade]
2026-02-20 04:07:36.279288 | orchestrator | + set -e
2026-02-20 04:07:36.279454 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:07:36.279473 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:07:36.279490 | orchestrator | + CEPH_VERSION=reef
2026-02-20 04:07:36.279500 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-20 04:07:36.279510 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-20 04:07:36.279527 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-02-20 04:07:36.289189 | orchestrator | + set -e
2026-02-20 04:07:36.289315 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 04:07:36.289344 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 04:07:36.289375 | orchestrator | ++ INTERACTIVE=false
2026-02-20 04:07:36.289394 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 04:07:36.289428 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 04:07:36.290327 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-02-20 04:07:36.332244 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-02-20 04:07:36.333070 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-20 04:07:36.371693 | orchestrator |
2026-02-20 04:07:36.371799 | orchestrator | # UPGRADE MANAGER
2026-02-20 04:07:36.371820 | orchestrator |
2026-02-20 04:07:36.371833 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-02-20 04:07:36.371845 | orchestrator | + echo
2026-02-20 04:07:36.371856 | orchestrator | + echo '# UPGRADE MANAGER'
2026-02-20 04:07:36.371869 | orchestrator | + echo
2026-02-20 04:07:36.371881 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:07:36.371893 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:07:36.371903 | orchestrator | + CEPH_VERSION=reef
2026-02-20 04:07:36.371915 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-20 04:07:36.371927 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-20 04:07:36.371938 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-02-20 04:07:36.377334 | orchestrator | + set -e
2026-02-20 04:07:36.377455 | orchestrator | + VERSION=10.0.0-rc.1
2026-02-20 04:07:36.377475 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-02-20 04:07:36.384852 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-02-20 04:07:36.384943 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-20 04:07:36.389812 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-20 04:07:36.394330 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-20 04:07:36.401261 | orchestrator | + set -e
2026-02-20 04:07:36.401315 | orchestrator | + pushd /opt/configuration
2026-02-20 04:07:36.401331 | orchestrator | /opt/configuration ~
2026-02-20 04:07:36.401338 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-20 04:07:36.401346 | orchestrator | + source /opt/venv/bin/activate
2026-02-20 04:07:36.402223 | orchestrator | ++ deactivate nondestructive
2026-02-20 04:07:36.402236 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:36.402246 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:36.403371 | orchestrator | ++ hash -r
2026-02-20 04:07:36.403400 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:36.403406 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-20 04:07:36.403413 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-20 04:07:36.403419 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-20 04:07:36.403427 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-20 04:07:36.403433 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-20 04:07:36.403439 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-20 04:07:36.403445 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-20 04:07:36.403452 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:36.403459 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:36.403465 | orchestrator | ++ export PATH
2026-02-20 04:07:36.403471 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:36.403477 | orchestrator | ++ '[' -z '' ']'
2026-02-20 04:07:36.403483 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-20 04:07:36.403489 | orchestrator | ++ PS1='(venv) '
2026-02-20 04:07:36.403495 | orchestrator | ++ export PS1
2026-02-20 04:07:36.403501 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-20 04:07:36.403506 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-20 04:07:36.403512 | orchestrator | ++ hash -r
2026-02-20 04:07:36.403521 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-20 04:07:37.323716 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-20 04:07:37.324635 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-20 04:07:37.325947 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-20 04:07:37.327380 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-20 04:07:37.328791 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-20 04:07:37.339120 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-20 04:07:37.340548 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-20 04:07:37.341570 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-20 04:07:37.342977 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-20 04:07:37.373020 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-20 04:07:37.374283 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-20 04:07:37.376519 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-20 04:07:37.377887 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-20 04:07:37.381814 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-20 04:07:37.583605 | orchestrator | ++ which gilt
2026-02-20 04:07:37.584804 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-20 04:07:37.584834 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-20 04:07:37.801790 | orchestrator | osism.cfg-generics:
2026-02-20 04:07:37.898839 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-20 04:07:37.899121 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-20 04:07:37.900151 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-20 04:07:37.901430 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-20 04:07:38.754306 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-20 04:07:38.766899 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-20 04:07:39.078162 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-20 04:07:39.124235 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-20 04:07:39.124350 | orchestrator | + deactivate
2026-02-20 04:07:39.124392 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-20 04:07:39.124416 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:39.124435 | orchestrator | + export PATH
2026-02-20 04:07:39.124455 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-20 04:07:39.124482 | orchestrator | + '[' -n '' ']'
2026-02-20 04:07:39.124519 | orchestrator | + hash -r
2026-02-20 04:07:39.124546 | orchestrator | + '[' -n '' ']'
2026-02-20 04:07:39.124565 | orchestrator | + unset VIRTUAL_ENV
2026-02-20 04:07:39.124582 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-20 04:07:39.124594 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-20 04:07:39.124637 | orchestrator | + unset -f deactivate
2026-02-20 04:07:39.124669 | orchestrator | ~
2026-02-20 04:07:39.124697 | orchestrator | + popd
2026-02-20 04:07:39.127429 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]]
2026-02-20 04:07:39.127543 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-20 04:07:39.131145 | orchestrator | + set -e
2026-02-20 04:07:39.131206 | orchestrator | + NAMESPACE=kolla/release
2026-02-20 04:07:39.131224 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-20 04:07:39.138512 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-20 04:07:39.145543 | orchestrator | /opt/configuration ~
2026-02-20 04:07:39.145654 | orchestrator | + set -e
2026-02-20 04:07:39.145670 | orchestrator | + pushd /opt/configuration
2026-02-20 04:07:39.145681 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-20 04:07:39.145693 | orchestrator | + source /opt/venv/bin/activate
2026-02-20 04:07:39.145704 | orchestrator | ++ deactivate nondestructive
2026-02-20 04:07:39.145721 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:39.145733 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:39.145756 | orchestrator | ++ hash -r
2026-02-20 04:07:39.145775 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:39.145804 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-20 04:07:39.145822 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-20 04:07:39.145841 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-20 04:07:39.145868 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-20 04:07:39.145888 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-20 04:07:39.145919 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-20 04:07:39.145937 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-20 04:07:39.145949 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:39.145967 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:39.145979 | orchestrator | ++ export PATH
2026-02-20 04:07:39.145990 | orchestrator | ++ '[' -n '' ']'
2026-02-20 04:07:39.146060 | orchestrator | ++ '[' -z '' ']'
2026-02-20 04:07:39.146075 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-20 04:07:39.146090 | orchestrator | ++ PS1='(venv) '
2026-02-20 04:07:39.146102 | orchestrator | ++ export PS1
2026-02-20 04:07:39.146119 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-20 04:07:39.146274 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-20 04:07:39.146294 | orchestrator | ++ hash -r
2026-02-20 04:07:39.146310 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-20 04:07:39.617476 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-20 04:07:39.618343 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-20 04:07:39.619542 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-20 04:07:39.620999 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-20 04:07:39.622059 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-20 04:07:39.632277 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-20 04:07:39.633577 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-20 04:07:39.634755 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-20 04:07:39.636000 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-20 04:07:39.665876 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-20 04:07:39.667084 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-20 04:07:39.668940 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-20 04:07:39.670317 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-20 04:07:39.674561 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-20 04:07:39.876524 | orchestrator | ++ which gilt
2026-02-20 04:07:39.877950 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-20 04:07:39.878015 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-20 04:07:40.053513 | orchestrator | osism.cfg-generics:
2026-02-20 04:07:40.109912 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-20 04:07:40.109996 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-20 04:07:40.110005 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-20 04:07:40.110013 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-20 04:07:40.669578 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-20 04:07:40.682361 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-20 04:07:40.992164 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-20 04:07:41.047721 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-20 04:07:41.047829 | orchestrator | + deactivate
2026-02-20 04:07:41.047867 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-20 04:07:41.047881 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-20 04:07:41.047892 | orchestrator | + export PATH
2026-02-20 04:07:41.047904 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-20 04:07:41.047916 | orchestrator | + '[' -n '' ']'
2026-02-20 04:07:41.047927 | orchestrator | + hash -r
2026-02-20 04:07:41.047952 | orchestrator | + '[' -n '' ']'
2026-02-20 04:07:41.047963 | orchestrator | + unset VIRTUAL_ENV
2026-02-20 04:07:41.047976 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-20 04:07:41.047987 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-20 04:07:41.047999 | orchestrator | + unset -f deactivate
2026-02-20 04:07:41.048010 | orchestrator | + popd
2026-02-20 04:07:41.048022 | orchestrator | ~
2026-02-20 04:07:41.050354 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-02-20 04:07:41.107661 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-20 04:07:41.108267 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-20 04:07:41.211920 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 04:07:41.212029 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-20 04:07:41.219806 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-20 04:07:41.226822 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-02-20 04:07:41.291562 | orchestrator | ++ '[' -1 -le 0 ']'
2026-02-20 04:07:41.292362 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0
2026-02-20 04:07:41.397410 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-02-20 04:07:41.397530 | orchestrator | ++ echo true
2026-02-20 04:07:41.398429 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-02-20 04:07:41.399918 | orchestrator | +++ semver 2024.2 2024.2
2026-02-20 04:07:41.489844 | orchestrator | ++ '[' 0 -le 0 ']'
2026-02-20 04:07:41.490892 | orchestrator | +++ semver 2024.2 2025.1
2026-02-20 04:07:41.553912 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-02-20 04:07:41.553989 | orchestrator | ++ echo false
2026-02-20 04:07:41.554458 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-02-20 04:07:41.554475 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-20 04:07:41.554482 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-02-20 04:07:41.554552 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-02-20 04:07:41.554562 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
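The `semver` calls in the trace above print -1, 0, or 1 and gate the RabbitMQ migration steps: the old manager is at or below 9.5.0 and the target is at or above 10.0.0-0, so `MANAGER_UPGRADE_CROSSES_10=true`. A minimal sketch of that boundary check, with `cmp_semver` as a hypothetical stand-in for the real `semver` helper (it only orders the numeric X.Y.Z core; the pre-release precedence that makes `10.0.0-rc.1` sort after `10.0.0-0` in the log is deliberately elided):

```shell
#!/bin/sh
# cmp_semver A B: prints -1, 0 or 1 depending on how the X.Y.Z core of A
# compares to B. A leading "v" and any "-prerelease" suffix are stripped,
# so full SemVer pre-release ordering is NOT implemented in this sketch.
cmp_semver() {
    a=${1#v}; a=${a%%-*}
    b=${2#v}; b=${b%%-*}
    i=1
    while [ "$i" -le 3 ]; do
        x=$(printf '%s' "$a" | cut -d. -f"$i")
        y=$(printf '%s' "$b" | cut -d. -f"$i")
        # Missing fields (e.g. "2024.2" has no third field) count as 0.
        if [ "${x:-0}" -lt "${y:-0}" ]; then printf '%s\n' -1; return; fi
        if [ "${x:-0}" -gt "${y:-0}" ]; then printf '%s\n' 1; return; fi
        i=$((i + 1))
    done
    printf '%s\n' 0
}

# The gate from the log: the upgrade crosses the 10.0 boundary when the
# old version is at or below 9.5.0 and the new one is at or above 10.0.0-0.
old=v0.20251130.0
new=10.0.0-rc.1
if [ "$(cmp_semver "$old" 9.5.0)" -le 0 ] && [ "$(cmp_semver "$new" 10.0.0-0)" -ge 0 ]; then
    echo "MANAGER_UPGRADE_CROSSES_10=true"
fi
```

With these inputs the sketch reproduces the flag seen in the log; date-style versions such as `v0.20251130.0` compare below any real release because their major component is 0.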
2026-02-20 04:07:41.560869 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-02-20 04:07:41.560932 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-02-20 04:07:41.579943 | orchestrator | export RABBITMQ3TO4=true
2026-02-20 04:07:41.582361 | orchestrator | + osism update manager
2026-02-20 04:07:46.435795 | orchestrator | Collecting uv
2026-02-20 04:07:46.531329 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-02-20 04:07:46.555150 | orchestrator | Downloading uv-0.10.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.1 MB)
2026-02-20 04:07:47.305307 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.1/23.1 MB 35.8 MB/s eta 0:00:00
2026-02-20 04:07:47.368482 | orchestrator | Installing collected packages: uv
2026-02-20 04:07:47.784045 | orchestrator | Successfully installed uv-0.10.4
2026-02-20 04:07:48.270265 | orchestrator | Resolved 11 packages in 296ms
2026-02-20 04:07:48.302562 | orchestrator | Downloading cryptography (4.3MiB)
2026-02-20 04:07:48.303194 | orchestrator | Downloading ansible-core (2.1MiB)
2026-02-20 04:07:48.303431 | orchestrator | Downloading ansible (54.5MiB)
2026-02-20 04:07:48.303956 | orchestrator | Downloading netaddr (2.2MiB)
2026-02-20 04:07:48.642914 | orchestrator | Downloaded netaddr
2026-02-20 04:07:48.760091 | orchestrator | Downloaded cryptography
2026-02-20 04:07:48.773397 | orchestrator | Downloaded ansible-core
2026-02-20 04:07:54.314326 | orchestrator | Downloaded ansible
2026-02-20 04:07:54.314505 | orchestrator | Prepared 11 packages in 6.04s
2026-02-20 04:07:54.849582 | orchestrator | Installed 11 packages in 533ms
2026-02-20 04:07:54.849731 | orchestrator | + ansible==11.11.0
2026-02-20 04:07:54.849747 | orchestrator | + ansible-core==2.18.13
2026-02-20 04:07:54.849761 | orchestrator | + cffi==2.0.0
2026-02-20 04:07:54.849773 | orchestrator | + cryptography==46.0.5
2026-02-20 04:07:54.849797 | orchestrator | + jinja2==3.1.6
2026-02-20 04:07:54.849809 | orchestrator | + markupsafe==3.0.3
2026-02-20 04:07:54.849821 | orchestrator | + netaddr==1.3.0
2026-02-20 04:07:54.849842 | orchestrator | + packaging==26.0
2026-02-20 04:07:54.849861 | orchestrator | + pycparser==3.0
2026-02-20 04:07:54.850101 | orchestrator | + pyyaml==6.0.3
2026-02-20 04:07:54.850135 | orchestrator | + resolvelib==1.0.1
2026-02-20 04:07:55.938964 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200422m7a8fwz3/tmpkvhhz9hi/ansible-collection-servicesyu28jj0b'...
2026-02-20 04:07:57.417998 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-20 04:07:57.418224 | orchestrator | Already on 'main'
2026-02-20 04:07:57.887379 | orchestrator | Starting galaxy collection install process
2026-02-20 04:07:57.887487 | orchestrator | Process install dependency map
2026-02-20 04:07:57.887502 | orchestrator | Starting collection install process
2026-02-20 04:07:57.887511 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-02-20 04:07:57.887522 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-02-20 04:07:57.887531 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-20 04:07:58.403590 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200467f13bgypp/tmphirdhq4e/ansible-playbooks-managerx75egi9_'...
2026-02-20 04:07:58.943201 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-20 04:07:58.943292 | orchestrator | Already on 'main'
2026-02-20 04:07:59.208853 | orchestrator | Starting galaxy collection install process
2026-02-20 04:07:59.208951 | orchestrator | Process install dependency map
2026-02-20 04:07:59.208965 | orchestrator | Starting collection install process
2026-02-20 04:07:59.208975 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-02-20 04:07:59.208986 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-02-20 04:07:59.208995 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-02-20 04:07:59.818743 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-02-20 04:07:59.818844 | orchestrator | -vvvv to see details
2026-02-20 04:08:00.217424 | orchestrator | 
2026-02-20 04:08:00.217538 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-02-20 04:08:00.217566 | orchestrator | 
2026-02-20 04:08:00.217586 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-20 04:08:04.067232 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:04.067360 | orchestrator | 
2026-02-20 04:08:04.067386 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-20 04:08:04.138463 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-20 04:08:04.138564 | orchestrator | 
2026-02-20 04:08:04.138705 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-20 04:08:05.765019 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:05.765136 | orchestrator | 
2026-02-20 04:08:05.765153 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-20 04:08:05.822792 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:05.822898 | orchestrator | 
2026-02-20 04:08:05.822916 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-20 04:08:05.897642 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-20 04:08:05.897758 | orchestrator | 
2026-02-20 04:08:05.897776 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-20 04:08:09.885334 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-02-20 04:08:09.885447 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-02-20 04:08:09.885464 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-20 04:08:09.885489 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-02-20 04:08:09.885502 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-20 04:08:09.885513 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-20 04:08:09.885525 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-20 04:08:09.885537 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-02-20 04:08:09.885549 | orchestrator | 
2026-02-20 04:08:09.885562 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-20 04:08:10.896901 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:10.897004 | orchestrator | 
2026-02-20 04:08:10.897020 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-20 04:08:11.797819 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:11.797937 | orchestrator | 
2026-02-20 04:08:11.797966 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-20 04:08:11.880744 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-20 04:08:11.880847 | orchestrator | 
2026-02-20 04:08:11.880864 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-20 04:08:13.636029 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-02-20 04:08:13.636133 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-02-20 04:08:13.636150 | orchestrator | 
2026-02-20 04:08:13.636163 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-20 04:08:14.543957 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:14.544059 | orchestrator | 
2026-02-20 04:08:14.544081 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-20 04:08:14.611691 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:08:14.611796 | orchestrator | 
2026-02-20 04:08:14.611815 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-20 04:08:14.681668 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-20 04:08:14.681762 | orchestrator | 
2026-02-20 04:08:14.681779 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-20 04:08:15.559331 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:15.559424 | orchestrator | 
2026-02-20 04:08:15.559436 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-20 04:08:15.618544 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-20 04:08:15.618690 | orchestrator | 
2026-02-20 04:08:15.618707 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-20 04:08:17.502593 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-20 04:08:17.502750 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-20 04:08:17.502768 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:17.502782 | orchestrator | 
2026-02-20 04:08:17.502794 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-20 04:08:18.433456 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:18.433538 | orchestrator | 
2026-02-20 04:08:18.433548 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-20 04:08:18.506257 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:08:18.506337 | orchestrator | 
2026-02-20 04:08:18.506362 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-20 04:08:18.604193 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-20 04:08:18.604296 | orchestrator | 
2026-02-20 04:08:18.604316 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-20 04:08:19.213790 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:19.213893 | orchestrator | 
2026-02-20 04:08:19.213910 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-20 04:08:19.726126 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:19.726216 | orchestrator | 
2026-02-20 04:08:19.726229 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-20 04:08:21.461390 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-02-20 04:08:21.461513 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-02-20 04:08:21.461530 | orchestrator | 
2026-02-20 04:08:21.461543 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-20 04:08:22.482918 | orchestrator | changed: [testbed-manager]
2026-02-20 04:08:22.483009 | orchestrator | 
2026-02-20 04:08:22.483024 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-20 04:08:23.020383 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:23.020455 | orchestrator | 
2026-02-20 04:08:23.020461 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-20 04:08:23.539386 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:23.539468 | orchestrator | 
2026-02-20 04:08:23.539501 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-20 04:08:23.590381 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:08:23.590450 | orchestrator | 
2026-02-20 04:08:23.590457 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-20 04:08:23.656319 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-20 04:08:23.656393 | orchestrator | 
2026-02-20 04:08:23.656403 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-20 04:08:23.712976 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:23.713084 | orchestrator | 
2026-02-20 04:08:23.713098 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-20 04:08:26.516320 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-02-20 04:08:26.516450 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-02-20 04:08:26.516480 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-02-20 04:08:26.516501 | orchestrator | 
2026-02-20 04:08:26.516518 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-20 04:08:27.465128 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:27.465266 | orchestrator | 
2026-02-20 04:08:27.465285 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-20 04:08:28.381810 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:28.381887 | orchestrator | 
2026-02-20 04:08:28.381894 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-20 04:08:29.349267 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:29.349399 | orchestrator | 
2026-02-20 04:08:29.349454 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-20 04:08:29.422083 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-20 04:08:29.422207 | orchestrator | 
2026-02-20 04:08:29.422233 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-20 04:08:29.475258 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:29.475335 | orchestrator | 
2026-02-20 04:08:29.475345 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-20 04:08:30.414199 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-02-20 04:08:30.414306 | orchestrator | 
2026-02-20 04:08:30.414323 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-20 04:08:30.493152 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-20 04:08:30.493242 | orchestrator | 
2026-02-20 04:08:30.493256 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-20 04:08:31.476282 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:31.476386 | orchestrator | 
2026-02-20 04:08:31.476403 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-20 04:08:32.506763 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:32.506911 | orchestrator | 
2026-02-20 04:08:32.506931 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-20 04:08:32.577978 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:08:32.578124 | orchestrator | 
2026-02-20 04:08:32.578138 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-20 04:08:32.637869 | orchestrator | ok: [testbed-manager]
2026-02-20 04:08:32.637959 | orchestrator | 
2026-02-20 04:08:32.637975 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-20 04:08:33.840749 | orchestrator | changed: [testbed-manager]
2026-02-20 04:08:33.840855 | orchestrator | 
2026-02-20 04:08:33.840872 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-20 04:09:33.880857 | orchestrator | changed: [testbed-manager]
2026-02-20 04:09:33.881004 | orchestrator | 
2026-02-20 04:09:33.881031 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-20 04:09:35.025330 | orchestrator | ok: [testbed-manager]
2026-02-20 04:09:35.025447 | orchestrator | 
2026-02-20 04:09:35.025463 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-20 04:09:35.079036 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:09:35.079159 | orchestrator | 
2026-02-20 04:09:35.079186 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-20 04:09:35.904418 | orchestrator | ok: [testbed-manager]
2026-02-20 04:09:35.904518 | orchestrator | 
2026-02-20 04:09:35.904534 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-20 04:09:35.977003 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:09:35.977168 | orchestrator | 
2026-02-20 04:09:35.977192 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-20 04:09:35.977206 | orchestrator | 
2026-02-20 04:09:35.977218 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-20 04:09:50.884415 | orchestrator | changed: [testbed-manager]
2026-02-20 04:09:50.884534 | orchestrator | 
2026-02-20 04:09:50.884554 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-20 04:10:50.947907 | orchestrator | Pausing for 60 seconds
2026-02-20 04:10:50.948027 | orchestrator | changed: [testbed-manager]
2026-02-20 04:10:50.948043 | orchestrator | 
2026-02-20 04:10:50.948057 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-02-20 04:10:50.998321 | orchestrator | ok: [testbed-manager]
2026-02-20 04:10:50.998416 | orchestrator | 
2026-02-20 04:10:50.998432 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-20 04:10:54.069284 | orchestrator | changed: [testbed-manager]
2026-02-20 04:10:54.069360 | orchestrator | 
2026-02-20 04:10:54.069367 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-20 04:11:56.581060 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-20 04:11:56.581176 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-20 04:11:56.581192 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-20 04:11:56.581205 | orchestrator | changed: [testbed-manager]
2026-02-20 04:11:56.581218 | orchestrator |
2026-02-20 04:11:56.581230 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-20 04:12:07.204094 | orchestrator | changed: [testbed-manager]
2026-02-20 04:12:07.204201 | orchestrator |
2026-02-20 04:12:07.204216 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-20 04:12:07.289712 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-20 04:12:07.289828 | orchestrator |
2026-02-20 04:12:07.289841 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-20 04:12:07.289851 | orchestrator |
2026-02-20 04:12:07.289860 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-20 04:12:07.363222 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:12:07.363335 | orchestrator |
2026-02-20 04:12:07.363352 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-20 04:12:07.451918 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-20 04:12:07.452014 | orchestrator |
2026-02-20 04:12:07.452045 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-20 04:12:08.551013 | orchestrator | changed: [testbed-manager]
2026-02-20 04:12:08.551118 | orchestrator |
2026-02-20 04:12:08.551138 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-20 04:12:11.963139 | orchestrator | ok: [testbed-manager]
2026-02-20 04:12:11.963290 | orchestrator |
2026-02-20 04:12:11.963315 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-20 04:12:12.064803 | orchestrator | ok: [testbed-manager] => {
2026-02-20 04:12:12.064899 | orchestrator | "version_check_result.stdout_lines": [
2026-02-20 04:12:12.064910 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-20 04:12:12.064918 | orchestrator | "Checking running containers against expected versions...",
2026-02-20 04:12:12.064927 | orchestrator | "",
2026-02-20 04:12:12.064934 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-20 04:12:12.064941 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-20 04:12:12.064948 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.064954 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-20 04:12:12.064960 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.064966 | orchestrator | "",
2026-02-20 04:12:12.064972 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-20 04:12:12.064979 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-20 04:12:12.064985 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.064992 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-20 04:12:12.064999 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065006 | orchestrator | "",
2026-02-20 04:12:12.065012 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-20 04:12:12.065018 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-20 04:12:12.065024 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065030 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-20 04:12:12.065036 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065043 | orchestrator | "",
2026-02-20 04:12:12.065048 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-20 04:12:12.065054 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-20 04:12:12.065060 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065066 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-20 04:12:12.065073 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065079 | orchestrator | "",
2026-02-20 04:12:12.065085 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-20 04:12:12.065091 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-20 04:12:12.065097 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065103 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-20 04:12:12.065110 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065116 | orchestrator | "",
2026-02-20 04:12:12.065122 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-20 04:12:12.065147 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065154 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065161 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065166 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065172 | orchestrator | "",
2026-02-20 04:12:12.065178 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-20 04:12:12.065184 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-20 04:12:12.065190 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065196 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-20 04:12:12.065201 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065207 | orchestrator | "",
2026-02-20 04:12:12.065214 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-02-20 04:12:12.065220 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-20 04:12:12.065226 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065240 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-20 04:12:12.065247 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065252 | orchestrator | "",
2026-02-20 04:12:12.065258 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-02-20 04:12:12.065264 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-20 04:12:12.065270 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065277 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-20 04:12:12.065283 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065289 | orchestrator | "",
2026-02-20 04:12:12.065299 | orchestrator | "Checking service: redis (Redis Cache)",
2026-02-20 04:12:12.065305 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-20 04:12:12.065312 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065318 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-20 04:12:12.065325 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065331 | orchestrator | "",
2026-02-20 04:12:12.065337 | orchestrator | "Checking service: api (OSISM API Service)",
2026-02-20 04:12:12.065344 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065350 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065356 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065363 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065370 | orchestrator | "",
2026-02-20 04:12:12.065377 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-02-20 04:12:12.065384 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065391 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065398 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065404 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065411 | orchestrator | "",
2026-02-20 04:12:12.065418 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-02-20 04:12:12.065425 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065433 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065440 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065447 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065454 | orchestrator | "",
2026-02-20 04:12:12.065461 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-02-20 04:12:12.065468 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065475 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065481 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065508 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065517 | orchestrator | "",
2026-02-20 04:12:12.065524 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-02-20 04:12:12.065530 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065567 | orchestrator | " Enabled: true",
2026-02-20 04:12:12.065576 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-20 04:12:12.065583 | orchestrator | " Status: ✅ MATCH",
2026-02-20 04:12:12.065589 | orchestrator | "",
2026-02-20 04:12:12.065596 | orchestrator | "=== Summary ===",
2026-02-20 04:12:12.065603 | orchestrator | "Errors (version mismatches): 0",
2026-02-20 04:12:12.065610 | orchestrator | "Warnings (expected containers not running): 0",
2026-02-20 04:12:12.065616 | orchestrator | "",
2026-02-20 04:12:12.065622 | orchestrator | "✅ All running containers match expected versions!"
2026-02-20 04:12:12.065630 | orchestrator | ]
2026-02-20 04:12:12.065634 | orchestrator | }
2026-02-20 04:12:12.065638 | orchestrator |
2026-02-20 04:12:12.065642 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-20 04:12:12.132463 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:12:12.132640 | orchestrator |
2026-02-20 04:12:12.132661 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:12:12.132675 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-02-20 04:12:12.132687 | orchestrator |
2026-02-20 04:12:24.577433 | orchestrator | 2026-02-20 04:12:24 | INFO  | Task 12af98b3-6e46-476b-b9c0-1c2fe1498665 (sync inventory) is running in background. Output coming soon.
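The version check above walks every manager service, compares the image the container is running against the expected tag, and counts mismatches. A minimal sketch of that loop follows; `running_image` is a hypothetical stub standing in for a per-container query (the real check would inspect the container, e.g. `docker inspect -f '{{.Config.Image}}' "$container"`), and the function names are assumptions, not the deployed script.

```shell
#!/bin/sh
# Stub standing in for the docker inspect query of a running container's image.
running_image() {
    echo "registry.osism.tech/osism/osism:0.20251208.0"
}

# Reads "service expected-image" pairs on stdin, prints MATCH/MISMATCH per
# service plus an error count, and exits non-zero if any image differs.
check_versions() {
    errors=0
    while read -r service expected; do
        running=$(running_image "$service")
        if [ "$running" = "$expected" ]; then
            echo "$service: MATCH"
        else
            echo "$service: MISMATCH (expected $expected, running $running)"
            errors=$((errors + 1))
        fi
    done
    echo "Errors (version mismatches): $errors"
    [ "$errors" -eq 0 ]
}

printf 'api registry.osism.tech/osism/osism:0.20251208.0\n' | check_versions
```

Exiting non-zero on mismatch lets the playbook fail the task instead of only printing a summary.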
2026-02-20 04:12:51.922461 | orchestrator | 2026-02-20 04:12:26 | INFO  | Starting group_vars file reorganization 2026-02-20 04:12:51.922612 | orchestrator | 2026-02-20 04:12:26 | INFO  | Moved 0 file(s) to their respective directories 2026-02-20 04:12:51.922632 | orchestrator | 2026-02-20 04:12:26 | INFO  | Group_vars file reorganization completed 2026-02-20 04:12:51.922665 | orchestrator | 2026-02-20 04:12:29 | INFO  | Starting variable preparation from inventory 2026-02-20 04:12:51.922677 | orchestrator | 2026-02-20 04:12:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-20 04:12:51.922689 | orchestrator | 2026-02-20 04:12:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-20 04:12:51.922700 | orchestrator | 2026-02-20 04:12:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-20 04:12:51.922711 | orchestrator | 2026-02-20 04:12:32 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-20 04:12:51.922722 | orchestrator | 2026-02-20 04:12:32 | INFO  | Variable preparation completed 2026-02-20 04:12:51.922733 | orchestrator | 2026-02-20 04:12:33 | INFO  | Starting inventory overwrite handling 2026-02-20 04:12:51.922744 | orchestrator | 2026-02-20 04:12:33 | INFO  | Handling group overwrites in 99-overwrite 2026-02-20 04:12:51.922755 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removing group frr:children from 60-generic 2026-02-20 04:12:51.922766 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-20 04:12:51.922777 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-20 04:12:51.922788 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-20 04:12:51.922799 | orchestrator | 2026-02-20 04:12:33 | INFO  | Handling group overwrites in 20-roles 2026-02-20 04:12:51.922811 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-20 04:12:51.922822 | orchestrator | 2026-02-20 04:12:33 | INFO  | Removed 5 group(s) in total 2026-02-20 04:12:51.922833 | orchestrator | 2026-02-20 04:12:33 | INFO  | Inventory overwrite handling completed 2026-02-20 04:12:51.922844 | orchestrator | 2026-02-20 04:12:34 | INFO  | Starting merge of inventory files 2026-02-20 04:12:51.922855 | orchestrator | 2026-02-20 04:12:34 | INFO  | Inventory files merged successfully 2026-02-20 04:12:51.922890 | orchestrator | 2026-02-20 04:12:40 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-20 04:12:51.922902 | orchestrator | 2026-02-20 04:12:50 | INFO  | Successfully wrote ClusterShell configuration 2026-02-20 04:12:52.121950 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-20 04:12:52.122142 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-20 04:12:52.122173 | orchestrator | + local max_attempts=60 2026-02-20 04:12:52.122194 | orchestrator | + local name=kolla-ansible 2026-02-20 04:12:52.122214 | orchestrator | + local attempt_num=1 2026-02-20 04:12:52.122442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-20 04:12:52.151385 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-20 04:12:52.151465 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-20 04:12:52.151477 | orchestrator | + local max_attempts=60 2026-02-20 04:12:52.151486 | orchestrator | + local name=osism-ansible 2026-02-20 04:12:52.151494 | orchestrator | + local attempt_num=1 2026-02-20 04:12:52.151990 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-20 04:12:52.178430 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-20 04:12:52.178518 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-20 04:12:52.358482 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-20 04:12:52.358635 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-20 04:12:52.358662 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-20 04:12:52.358682 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-20 04:12:52.358698 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-20 04:12:52.358710 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-20 04:12:52.358721 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-20 04:12:52.358732 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up About a minute (healthy) 2026-02-20 04:12:52.358743 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 23 seconds ago 2026-02-20 04:12:52.358754 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-20 04:12:52.358765 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-20 04:12:52.358776 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-20 04:12:52.358787 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-20 04:12:52.358825 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-20 04:12:52.358837 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-20 04:12:52.358848 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-20 04:12:52.363126 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-20 04:12:52.363223 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-20 04:12:52.363248 | orchestrator | + osism apply facts 2026-02-20 04:13:04.458491 | orchestrator | 2026-02-20 04:13:04 | INFO  | Task 2fbe4086-b38b-4e33-9df4-637504a1e241 (facts) was prepared for execution. 2026-02-20 04:13:04.458643 | orchestrator | 2026-02-20 04:13:04 | INFO  | It takes a moment until task 2fbe4086-b38b-4e33-9df4-637504a1e241 (facts) has been started and output is visible here. 
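The xtrace above shows the script's `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` until a container reports `healthy`. A minimal sketch of that pattern follows; the function and argument names come from the trace, while the retry delay and the factored-out `probe_health` function are assumptions so the loop can run without a Docker daemon.

```shell
#!/bin/sh
# Stub probe; the real script calls:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
probe_health() {
    echo "healthy"
}

# Poll the health status up to $1 times for container $2.
wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    while [ "$(probe_health "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    echo "$name is healthy"
}

wait_for_container_healthy 60 kolla-ansible
```

Returning non-zero after the attempt budget is exhausted lets the caller (here running under `set -e`) abort instead of hanging forever on a container stuck in `starting`.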
2026-02-20 04:13:22.740218 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-20 04:13:22.740332 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-20 04:13:22.740350 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-20 04:13:22.740356 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-20 04:13:22.740369 | orchestrator |
2026-02-20 04:13:22.740376 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-20 04:13:22.740382 | orchestrator |
2026-02-20 04:13:22.740388 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-20 04:13:22.740394 | orchestrator | Friday 20 February 2026 04:13:10 +0000 (0:00:01.605) 0:00:01.605 *******
2026-02-20 04:13:22.740401 | orchestrator | ok: [testbed-manager]
2026-02-20 04:13:22.740416 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:13:22.740422 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:13:22.740428 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:13:22.740434 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:13:22.740440 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:13:22.740446 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:13:22.740452 | orchestrator |
2026-02-20 04:13:22.740458 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-20 04:13:22.740464 | orchestrator | Friday 20 February 2026 04:13:12 +0000 (0:00:02.252) 0:00:03.858 *******
2026-02-20 04:13:22.740470 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:13:22.740476 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:13:22.740497 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:13:22.740503 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:13:22.740512 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:13:22.740518 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:13:22.740825 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:13:22.740843 | orchestrator |
2026-02-20 04:13:22.740855 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-20 04:13:22.740864 | orchestrator |
2026-02-20 04:13:22.740874 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-20 04:13:22.740884 | orchestrator | Friday 20 February 2026 04:13:14 +0000 (0:00:01.619) 0:00:05.477 *******
2026-02-20 04:13:22.740894 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:13:22.740905 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:13:22.740915 | orchestrator | ok: [testbed-manager]
2026-02-20 04:13:22.740926 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:13:22.740961 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:13:22.740968 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:13:22.740975 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:13:22.740982 | orchestrator |
2026-02-20 04:13:22.740988 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-20 04:13:22.740995 | orchestrator |
2026-02-20 04:13:22.741002 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-20 04:13:22.741009 | orchestrator | Friday 20 February 2026 04:13:20 +0000 (0:00:06.312) 0:00:11.790 *******
2026-02-20 04:13:22.741016 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:13:22.741023 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:13:22.741029 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:13:22.741036 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:13:22.741043 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:13:22.741049 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:13:22.741055 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:13:22.741062 | orchestrator |
2026-02-20 04:13:22.741069 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:13:22.741076 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741084 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741091 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741098 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741104 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741111 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741117 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:13:22.741124 | orchestrator |
2026-02-20 04:13:22.741131 | orchestrator |
2026-02-20 04:13:22.741138 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:13:22.741145 | orchestrator | Friday 20 February 2026 04:13:22 +0000 (0:00:01.649) 0:00:13.439 *******
2026-02-20 04:13:22.741151 | orchestrator | ===============================================================================
2026-02-20 04:13:22.741158 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.31s
2026-02-20 04:13:22.741164 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.25s
2026-02-20 04:13:22.741169 | orchestrator | Gather facts for all hosts ---------------------------------------------- 1.65s
2026-02-20 04:13:22.741176 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.62s
2026-02-20 04:13:23.061350 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-20 04:13:23.135099 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 04:13:23.135193 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-20 04:13:23.158085 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-02-20 04:13:23.158172 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-02-20 04:13:23.162249 | orchestrator | + set -e
2026-02-20 04:13:23.162328 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-02-20 04:13:23.162346 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-20 04:13:23.167691 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-02-20 04:13:23.176050 | orchestrator |
2026-02-20 04:13:23.176122 | orchestrator | # UPGRADE SERVICES
2026-02-20 04:13:23.176154 | orchestrator |
2026-02-20 04:13:23.176161 | orchestrator | + set -e
2026-02-20 04:13:23.176169 | orchestrator | + echo
2026-02-20 04:13:23.176176 | orchestrator | + echo '# UPGRADE SERVICES'
2026-02-20 04:13:23.176183 | orchestrator | + echo
2026-02-20 04:13:23.176190 | orchestrator | + source /opt/manager-vars.sh
2026-02-20 04:13:23.176617 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-20 04:13:23.176637 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-20 04:13:23.176655 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-20 04:13:23.176659 | orchestrator | ++ CEPH_VERSION=reef
2026-02-20 04:13:23.176664 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-20 04:13:23.176669 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-20 04:13:23.176673 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-20 04:13:23.176677 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-20 04:13:23.176681 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-20 04:13:23.176697 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-20 04:13:23.176701 | orchestrator | ++ export ARA=false
2026-02-20 04:13:23.176706 | orchestrator | ++ ARA=false
2026-02-20 04:13:23.176709 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-20 04:13:23.176713 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-20 04:13:23.176750 | orchestrator | ++ export TEMPEST=false
2026-02-20 04:13:23.176755 | orchestrator | ++ TEMPEST=false
2026-02-20 04:13:23.176759 | orchestrator | ++ export IS_ZUUL=true
2026-02-20 04:13:23.176763 | orchestrator | ++ IS_ZUUL=true
2026-02-20 04:13:23.176767 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 04:13:23.176771 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 04:13:23.176775 | orchestrator | ++ export EXTERNAL_API=false
2026-02-20 04:13:23.176779 | orchestrator | ++ EXTERNAL_API=false
2026-02-20 04:13:23.176908 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-20 04:13:23.176916 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-20 04:13:23.176920 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-20 04:13:23.176924 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-20 04:13:23.176928 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-20 04:13:23.176932 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-20 04:13:23.176936 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-20 04:13:23.176940 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-20 04:13:23.177052 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-20 04:13:23.177087 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-20 04:13:23.177093 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-20 04:13:23.183169 | orchestrator | + set -e
2026-02-20 04:13:23.183203 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 04:13:23.184505 | orchestrator |
2026-02-20 04:13:23.184596 | orchestrator | # PULL IMAGES
2026-02-20 04:13:23.184611 | orchestrator |
2026-02-20 04:13:23.184622 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 04:13:23.184634 | orchestrator | ++ INTERACTIVE=false
2026-02-20 04:13:23.184645 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 04:13:23.184656 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 04:13:23.184667 | orchestrator | + source /opt/manager-vars.sh
2026-02-20 04:13:23.184678 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-20 04:13:23.184689 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-20 04:13:23.184700 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-20 04:13:23.184711 | orchestrator | ++ CEPH_VERSION=reef
2026-02-20 04:13:23.184722 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-20 04:13:23.184734 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-20 04:13:23.184745 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-20 04:13:23.184757 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-20 04:13:23.184768 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-20 04:13:23.184779 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-20 04:13:23.184790 | orchestrator | ++ export ARA=false
2026-02-20 04:13:23.184801 | orchestrator | ++ ARA=false
2026-02-20 04:13:23.184812 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-20 04:13:23.184823 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-20 04:13:23.184834 | orchestrator | ++ export TEMPEST=false
2026-02-20 04:13:23.184846 | orchestrator | ++ TEMPEST=false
2026-02-20 04:13:23.184857 | orchestrator | ++ export IS_ZUUL=true
2026-02-20 04:13:23.184868 | orchestrator | ++ IS_ZUUL=true
2026-02-20 04:13:23.184879 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 04:13:23.184891 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191
2026-02-20 04:13:23.184902 | orchestrator | ++ export EXTERNAL_API=false
2026-02-20 04:13:23.184912 | orchestrator | ++ EXTERNAL_API=false
2026-02-20 04:13:23.184923 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-20 04:13:23.184968 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-20 04:13:23.184980 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-20 04:13:23.184991 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-20 04:13:23.185025 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-20 04:13:23.185037 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-20 04:13:23.185048 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-20 04:13:23.185059 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-20 04:13:23.185071 | orchestrator | + echo
2026-02-20 04:13:23.185084 | orchestrator | + echo '# PULL IMAGES'
2026-02-20 04:13:23.185098 | orchestrator | + echo
2026-02-20 04:13:23.185215 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-20 04:13:23.242784 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 04:13:23.242882 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-20 04:13:25.295212 | orchestrator | 2026-02-20 04:13:25 | INFO  | Trying to run play pull-images in environment custom
2026-02-20 04:13:35.541637 | orchestrator | 2026-02-20 04:13:35 | INFO  | Task 982cfafd-15c5-46c1-96ec-db3e2e30897f (pull-images) was prepared for execution.
2026-02-20 04:13:35.541750 | orchestrator | 2026-02-20 04:13:35 | INFO  | Task 982cfafd-15c5-46c1-96ec-db3e2e30897f is running in background. No more output. Check ARA for logs.
2026-02-20 04:13:35.848103 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-20 04:13:35.857780 | orchestrator | + set -e
2026-02-20 04:13:35.857868 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 04:13:35.857884 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 04:13:35.857896 | orchestrator | ++ INTERACTIVE=false
2026-02-20 04:13:35.857908 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 04:13:35.857919 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 04:13:35.857931 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-20 04:13:35.859127 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-20 04:13:35.873494 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:13:35.873596 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-20 04:13:35.873670 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-20 04:13:35.923095 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-20 04:13:35.923203 | orchestrator | + osism apply frr
2026-02-20 04:13:47.953596 | orchestrator | 2026-02-20 04:13:47 | INFO  | Task 2e696054-26fd-4eb1-b58d-e2c9c5dc7b33 (frr) was prepared for execution.
2026-02-20 04:13:47.953715 | orchestrator | 2026-02-20 04:13:47 | INFO  | It takes a moment until task 2e696054-26fd-4eb1-b58d-e2c9c5dc7b33 (frr) has been started and output is visible here.
2026-02-20 04:14:17.650426 | orchestrator |
2026-02-20 04:14:17.650601 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-20 04:14:17.650620 | orchestrator |
2026-02-20 04:14:17.650632 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-20 04:14:17.650644 | orchestrator | Friday 20 February 2026 04:13:54 +0000 (0:00:03.162) 0:00:03.162 *******
2026-02-20 04:14:17.650656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-20 04:14:17.650668 | orchestrator |
2026-02-20 04:14:17.650679 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-20 04:14:17.650690 | orchestrator | Friday 20 February 2026 04:13:56 +0000 (0:00:01.930) 0:00:05.093 *******
2026-02-20 04:14:17.650702 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.650714 | orchestrator |
2026-02-20 04:14:17.650725 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-20 04:14:17.650736 | orchestrator | Friday 20 February 2026 04:13:58 +0000 (0:00:02.117) 0:00:07.210 *******
2026-02-20 04:14:17.650748 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.650759 | orchestrator |
2026-02-20 04:14:17.650770 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-20 04:14:17.650781 | orchestrator | Friday 20 February 2026 04:14:01 +0000 (0:00:02.488) 0:00:09.699 *******
2026-02-20 04:14:17.650792 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.650802 | orchestrator |
2026-02-20 04:14:17.650813 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-20 04:14:17.650824 | orchestrator | Friday 20 February 2026 04:14:03 +0000 (0:00:01.770) 0:00:11.470 *******
2026-02-20 04:14:17.650860 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.650872 | orchestrator |
2026-02-20 04:14:17.650883 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-20 04:14:17.650894 | orchestrator | Friday 20 February 2026 04:14:05 +0000 (0:00:01.801) 0:00:13.271 *******
2026-02-20 04:14:17.650904 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.650915 | orchestrator |
2026-02-20 04:14:17.650926 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-20 04:14:17.650937 | orchestrator | Friday 20 February 2026 04:14:07 +0000 (0:00:02.285) 0:00:15.557 *******
2026-02-20 04:14:17.650948 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:14:17.650962 | orchestrator |
2026-02-20 04:14:17.650975 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-20 04:14:17.650988 | orchestrator | Friday 20 February 2026 04:14:08 +0000 (0:00:01.092) 0:00:16.649 *******
2026-02-20 04:14:17.651001 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:14:17.651014 | orchestrator |
2026-02-20 04:14:17.651026 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-20 04:14:17.651037 | orchestrator | Friday 20 February 2026 04:14:09 +0000 (0:00:01.113) 0:00:17.763 *******
2026-02-20 04:14:17.651048 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.651059 | orchestrator |
2026-02-20 04:14:17.651070 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-20 04:14:17.651080 | orchestrator | Friday 20 February 2026 04:14:11 +0000 (0:00:01.865) 0:00:19.629 *******
2026-02-20 04:14:17.651091 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-20 04:14:17.651119 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-20 04:14:17.651131 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-20 04:14:17.651143 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-20 04:14:17.651154 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-20 04:14:17.651166 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-20 04:14:17.651177 | orchestrator |
2026-02-20 04:14:17.651187 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-20 04:14:17.651198 | orchestrator | Friday 20 February 2026 04:14:14 +0000 (0:00:03.532) 0:00:23.161 *******
2026-02-20 04:14:17.651209 | orchestrator | ok: [testbed-manager]
2026-02-20 04:14:17.651220 | orchestrator |
2026-02-20 04:14:17.651230 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:14:17.651241 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-20 04:14:17.651252 | orchestrator |
2026-02-20 04:14:17.651263 | orchestrator |
2026-02-20 04:14:17.651273 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:14:17.651284 | orchestrator | Friday 20 February 2026 04:14:17 +0000 (0:00:02.414) 0:00:25.576 *******
2026-02-20 04:14:17.651295 | orchestrator | ===============================================================================
2026-02-20 04:14:17.651305 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.53s
2026-02-20 04:14:17.651316 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.49s
2026-02-20 04:14:17.651326 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.41s
2026-02-20 04:14:17.651337 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.29s
2026-02-20 04:14:17.651348 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.12s
2026-02-20 04:14:17.651359 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.93s
2026-02-20 04:14:17.651369 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.87s
2026-02-20 04:14:17.651389 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.80s
2026-02-20 04:14:17.651416 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.77s
2026-02-20 04:14:17.651428 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.11s
2026-02-20 04:14:17.651439 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.09s
2026-02-20 04:14:18.011937 | orchestrator | + osism apply kubernetes
2026-02-20 04:14:20.137916 | orchestrator | 2026-02-20 04:14:20 | INFO  | Task 2c180044-f169-401a-b6bd-47281f03875e (kubernetes) was prepared for execution.
2026-02-20 04:14:20.138077 | orchestrator | 2026-02-20 04:14:20 | INFO  | It takes a moment until task 2c180044-f169-401a-b6bd-47281f03875e (kubernetes) has been started and output is visible here.
2026-02-20 04:15:02.397334 | orchestrator |
2026-02-20 04:15:02.397436 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-20 04:15:02.397449 | orchestrator |
2026-02-20 04:15:02.397458 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-20 04:15:02.397468 | orchestrator | Friday 20 February 2026 04:14:26 +0000 (0:00:01.678) 0:00:01.678 *******
2026-02-20 04:15:02.397476 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.397486 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.397494 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.397502 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.397548 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.397558 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.397566 | orchestrator |
2026-02-20 04:15:02.397574 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-20 04:15:02.397583 | orchestrator | Friday 20 February 2026 04:14:29 +0000 (0:00:03.610) 0:00:05.288 *******
2026-02-20 04:15:02.397591 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.397601 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.397609 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.397617 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.397625 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.397633 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.397641 | orchestrator |
2026-02-20 04:15:02.397649 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-20 04:15:02.397657 | orchestrator | Friday 20 February 2026 04:14:31 +0000 (0:00:01.628) 0:00:06.917 *******
2026-02-20 04:15:02.397666 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.397674 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.397682 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.397690 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.397698 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.397706 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.397714 | orchestrator |
2026-02-20 04:15:02.397722 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-20 04:15:02.397730 | orchestrator | Friday 20 February 2026 04:14:33 +0000 (0:00:01.656) 0:00:08.573 *******
2026-02-20 04:15:02.397738 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.397746 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.397754 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.397762 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.397770 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.397778 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.397786 | orchestrator |
2026-02-20 04:15:02.397794 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-20 04:15:02.397802 | orchestrator | Friday 20 February 2026 04:14:36 +0000 (0:00:03.162) 0:00:11.736 *******
2026-02-20 04:15:02.397810 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.397818 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.397826 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.397834 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.397861 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.397871 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.397880 | orchestrator |
2026-02-20 04:15:02.397890 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-20 04:15:02.397899 | orchestrator | Friday 20 February 2026 04:14:38 +0000 (0:00:02.249) 0:00:13.985 *******
2026-02-20 04:15:02.397909 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.397918 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.397927 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.397937 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.397946 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.397955 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.397965 | orchestrator |
2026-02-20 04:15:02.397974 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-20 04:15:02.397984 | orchestrator | Friday 20 February 2026 04:14:40 +0000 (0:00:02.142) 0:00:16.128 *******
2026-02-20 04:15:02.397993 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398003 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398012 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398076 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398086 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398095 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398105 | orchestrator |
2026-02-20 04:15:02.398114 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-20 04:15:02.398124 | orchestrator | Friday 20 February 2026 04:14:42 +0000 (0:00:01.853) 0:00:17.981 *******
2026-02-20 04:15:02.398133 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398142 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398152 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398161 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398178 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398188 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398198 | orchestrator |
2026-02-20 04:15:02.398207 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-20 04:15:02.398215 | orchestrator | Friday 20 February 2026 04:14:44 +0000 (0:00:01.768) 0:00:19.749 *******
2026-02-20 04:15:02.398223 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398232 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398240 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398248 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398256 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398264 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398272 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398280 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398288 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398296 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398304 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398312 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398337 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398345 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398353 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398361 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-20 04:15:02.398369 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-20 04:15:02.398377 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398385 | orchestrator |
2026-02-20 04:15:02.398400 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-20 04:15:02.398408 | orchestrator | Friday 20 February 2026 04:14:46 +0000 (0:00:02.020) 0:00:21.770 *******
2026-02-20 04:15:02.398416 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398424 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398469 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398477 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398485 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398492 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398500 | orchestrator |
2026-02-20 04:15:02.398508 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-20 04:15:02.398585 | orchestrator | Friday 20 February 2026 04:14:48 +0000 (0:00:02.042) 0:00:23.813 *******
2026-02-20 04:15:02.398597 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.398610 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.398622 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.398635 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.398648 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.398661 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.398673 | orchestrator |
2026-02-20 04:15:02.398687 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-20 04:15:02.398700 | orchestrator | Friday 20 February 2026 04:14:50 +0000 (0:00:01.967) 0:00:25.780 *******
2026-02-20 04:15:02.398713 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:15:02.398726 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:15:02.398740 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:15:02.398754 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:15:02.398768 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:15:02.398776 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:15:02.398784 | orchestrator |
2026-02-20 04:15:02.398792 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-20 04:15:02.398800 | orchestrator | Friday 20 February 2026 04:14:53 +0000 (0:00:03.165) 0:00:28.945 *******
2026-02-20 04:15:02.398808 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398817 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398824 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398832 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398840 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398846 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398853 | orchestrator |
2026-02-20 04:15:02.398860 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-20 04:15:02.398867 | orchestrator | Friday 20 February 2026 04:14:56 +0000 (0:00:02.520) 0:00:31.466 *******
2026-02-20 04:15:02.398873 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398880 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398887 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398893 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398900 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398907 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398914 | orchestrator |
2026-02-20 04:15:02.398921 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-20 04:15:02.398929 | orchestrator | Friday 20 February 2026 04:14:58 +0000 (0:00:02.047) 0:00:33.513 *******
2026-02-20 04:15:02.398936 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.398946 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.398953 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.398960 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.398967 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.398974 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.398980 | orchestrator |
2026-02-20 04:15:02.398987 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-20 04:15:02.398994 | orchestrator | Friday 20 February 2026 04:14:59 +0000 (0:00:01.878) 0:00:35.392 *******
2026-02-20 04:15:02.399008 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-20 04:15:02.399016 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-20 04:15:02.399022 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.399029 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-20 04:15:02.399036 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-20 04:15:02.399043 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.399049 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-20 04:15:02.399056 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-20 04:15:02.399063 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:15:02.399070 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-20 04:15:02.399076 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-20 04:15:02.399083 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:15:02.399090 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-20 04:15:02.399096 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-20 04:15:02.399103 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:15:02.399110 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-20 04:15:02.399117 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-20 04:15:02.399123 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:15:02.399130 | orchestrator |
2026-02-20 04:15:02.399137 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-20 04:15:02.399144 | orchestrator | Friday 20 February 2026 04:15:01 +0000 (0:00:02.032) 0:00:37.424 *******
2026-02-20 04:15:02.399151 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:15:02.399158 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:15:02.399172 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:16:47.632695 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:16:47.632835 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.632852 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.632865 | orchestrator |
2026-02-20 04:16:47.632879 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-20 04:16:47.632892 | orchestrator | Friday 20 February 2026 04:15:03 +0000 (0:00:01.920) 0:00:39.345 *******
2026-02-20 04:16:47.632903 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:16:47.632915 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:16:47.632926 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:16:47.632937 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:16:47.632948 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.632959 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.632971 | orchestrator |
2026-02-20 04:16:47.632982 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-20 04:16:47.632993 | orchestrator |
2026-02-20 04:16:47.633005 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-20 04:16:47.633017 | orchestrator | Friday 20 February 2026 04:15:06 +0000 (0:00:02.769) 0:00:42.114 *******
2026-02-20 04:16:47.633029 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633041 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633070 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633082 | orchestrator |
2026-02-20 04:16:47.633098 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-20 04:16:47.633110 | orchestrator | Friday 20 February 2026 04:15:08 +0000 (0:00:01.726) 0:00:43.841 *******
2026-02-20 04:16:47.633121 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633132 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633143 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633159 | orchestrator |
2026-02-20 04:16:47.633178 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-20 04:16:47.633194 | orchestrator | Friday 20 February 2026 04:15:10 +0000 (0:00:02.204) 0:00:46.045 *******
2026-02-20 04:16:47.633264 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:16:47.633285 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:16:47.633304 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:16:47.633322 | orchestrator |
2026-02-20 04:16:47.633342 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-20 04:16:47.633360 | orchestrator | Friday 20 February 2026 04:15:12 +0000 (0:00:02.188) 0:00:48.234 *******
2026-02-20 04:16:47.633378 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633395 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633412 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633430 | orchestrator |
2026-02-20 04:16:47.633448 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-20 04:16:47.633488 | orchestrator | Friday 20 February 2026 04:15:14 +0000 (0:00:01.930) 0:00:50.164 *******
2026-02-20 04:16:47.633544 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:16:47.633565 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.633584 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.633602 | orchestrator |
2026-02-20 04:16:47.633621 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-20 04:16:47.633641 | orchestrator | Friday 20 February 2026 04:15:16 +0000 (0:00:01.384) 0:00:51.549 *******
2026-02-20 04:16:47.633661 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633677 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633688 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633699 | orchestrator |
2026-02-20 04:16:47.633710 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-20 04:16:47.633721 | orchestrator | Friday 20 February 2026 04:15:17 +0000 (0:00:01.638) 0:00:53.187 *******
2026-02-20 04:16:47.633732 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633742 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633753 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633764 | orchestrator |
2026-02-20 04:16:47.633775 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-20 04:16:47.633786 | orchestrator | Friday 20 February 2026 04:15:19 +0000 (0:00:02.165) 0:00:55.352 *******
2026-02-20 04:16:47.633797 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:16:47.633808 | orchestrator |
2026-02-20 04:16:47.633818 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-20 04:16:47.633829 | orchestrator | Friday 20 February 2026 04:15:21 +0000 (0:00:01.941) 0:00:57.294 *******
2026-02-20 04:16:47.633840 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633851 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.633862 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.633874 | orchestrator |
2026-02-20 04:16:47.633893 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-20 04:16:47.633913 | orchestrator | Friday 20 February 2026 04:15:24 +0000 (0:00:02.395) 0:00:59.690 *******
2026-02-20 04:16:47.633933 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.633951 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.633970 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.633984 | orchestrator |
2026-02-20 04:16:47.633995 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-20 04:16:47.634006 | orchestrator | Friday 20 February 2026 04:15:25 +0000 (0:00:01.658) 0:01:01.348 *******
2026-02-20 04:16:47.634079 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.634092 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.634103 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:16:47.634114 | orchestrator |
2026-02-20 04:16:47.634125 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-20 04:16:47.634136 | orchestrator | Friday 20 February 2026 04:15:27 +0000 (0:00:01.784) 0:01:03.133 *******
2026-02-20 04:16:47.634147 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.634157 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.634168 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:16:47.634193 | orchestrator |
2026-02-20 04:16:47.634204 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-20 04:16:47.634215 | orchestrator | Friday 20 February 2026 04:15:30 +0000 (0:00:02.391) 0:01:05.524 *******
2026-02-20 04:16:47.634226 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:16:47.634237 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.634269 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.634281 | orchestrator |
2026-02-20 04:16:47.634292 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-20 04:16:47.634303 | orchestrator | Friday 20 February 2026 04:15:31 +0000 (0:00:01.375) 0:01:06.900 *******
2026-02-20 04:16:47.634313 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:16:47.634324 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:16:47.634335 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:16:47.634346 | orchestrator |
2026-02-20 04:16:47.634357 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-20 04:16:47.634368 | orchestrator | Friday 20 February 2026 04:15:32 +0000 (0:00:01.518) 0:01:08.418 *******
2026-02-20 04:16:47.634379 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:16:47.634390 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:16:47.634400 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:16:47.634411 | orchestrator |
2026-02-20 04:16:47.634422 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-20 04:16:47.634433 | orchestrator | Friday 20 February 2026 04:15:35 +0000 (0:00:02.168) 0:01:10.586 *******
2026-02-20 04:16:47.634444 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.634455 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.634466 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.634477 | orchestrator |
2026-02-20 04:16:47.634487 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-20 04:16:47.634520 | orchestrator | Friday 20 February 2026 04:15:37 +0000 (0:00:01.976) 0:01:12.563 *******
2026-02-20 04:16:47.634531 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:16:47.634542 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:16:47.634553 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:16:47.634564 | orchestrator |
2026-02-20 04:16:47.634575
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-20 04:16:47.634586 | orchestrator | Friday 20 February 2026 04:15:38 +0000 (0:00:01.398) 0:01:13.961 ******* 2026-02-20 04:16:47.634597 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 04:16:47.634610 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 04:16:47.634621 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-20 04:16:47.634632 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 04:16:47.634642 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 04:16:47.634653 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-20 04:16:47.634664 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-20 04:16:47.634675 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-20 04:16:47.634685 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-20 04:16:47.634696 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:16:47.634716 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:16:47.634727 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:16:47.634738 | orchestrator | 2026-02-20 04:16:47.634749 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-20 04:16:47.634760 | orchestrator | Friday 20 February 2026 04:16:12 +0000 (0:00:34.191) 0:01:48.152 ******* 2026-02-20 04:16:47.634771 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:16:47.634782 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:16:47.634793 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:16:47.634804 | orchestrator | 2026-02-20 04:16:47.634814 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-20 04:16:47.634825 | orchestrator | Friday 20 February 2026 04:16:14 +0000 (0:00:01.359) 0:01:49.512 ******* 2026-02-20 04:16:47.634836 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:16:47.634847 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:16:47.634858 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:16:47.634869 | orchestrator | 2026-02-20 04:16:47.634879 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-20 04:16:47.634890 | orchestrator | Friday 20 February 2026 04:16:16 +0000 (0:00:02.160) 0:01:51.673 ******* 2026-02-20 04:16:47.634901 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:16:47.634912 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:16:47.634923 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:16:47.634933 | orchestrator | 2026-02-20 04:16:47.634944 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-20 04:16:47.634964 | orchestrator | Friday 20 February 2026 04:16:18 +0000 (0:00:02.313) 0:01:53.986 ******* 2026-02-20 04:16:47.634975 | orchestrator 
| changed: [testbed-node-0] 2026-02-20 04:16:47.634986 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:16:47.634997 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:16:47.635008 | orchestrator | 2026-02-20 04:16:47.635019 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-20 04:16:47.635030 | orchestrator | Friday 20 February 2026 04:16:45 +0000 (0:00:27.429) 0:02:21.416 ******* 2026-02-20 04:16:47.635041 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:16:47.635052 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:16:47.635063 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:16:47.635074 | orchestrator | 2026-02-20 04:16:47.635085 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-20 04:16:47.635104 | orchestrator | Friday 20 February 2026 04:16:47 +0000 (0:00:01.662) 0:02:23.079 ******* 2026-02-20 04:17:36.619100 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.619236 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:17:36.619253 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.619266 | orchestrator | 2026-02-20 04:17:36.619279 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-20 04:17:36.619292 | orchestrator | Friday 20 February 2026 04:16:49 +0000 (0:00:01.824) 0:02:24.904 ******* 2026-02-20 04:17:36.619303 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:17:36.619315 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:17:36.619326 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:17:36.619337 | orchestrator | 2026-02-20 04:17:36.619348 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-20 04:17:36.619359 | orchestrator | Friday 20 February 2026 04:16:51 +0000 (0:00:02.003) 0:02:26.907 ******* 2026-02-20 04:17:36.619370 | orchestrator | ok: [testbed-node-1] 2026-02-20 
04:17:36.619381 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.619392 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.619403 | orchestrator | 2026-02-20 04:17:36.619414 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-20 04:17:36.619425 | orchestrator | Friday 20 February 2026 04:16:53 +0000 (0:00:01.651) 0:02:28.559 ******* 2026-02-20 04:17:36.619435 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.619446 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:17:36.619457 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.619524 | orchestrator | 2026-02-20 04:17:36.619560 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-20 04:17:36.619576 | orchestrator | Friday 20 February 2026 04:16:54 +0000 (0:00:01.362) 0:02:29.921 ******* 2026-02-20 04:17:36.619608 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:17:36.619626 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:17:36.619644 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:17:36.619661 | orchestrator | 2026-02-20 04:17:36.619679 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-20 04:17:36.619697 | orchestrator | Friday 20 February 2026 04:16:56 +0000 (0:00:01.727) 0:02:31.649 ******* 2026-02-20 04:17:36.619714 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.619732 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:17:36.619750 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.619767 | orchestrator | 2026-02-20 04:17:36.619786 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-20 04:17:36.619802 | orchestrator | Friday 20 February 2026 04:16:58 +0000 (0:00:01.969) 0:02:33.619 ******* 2026-02-20 04:17:36.619821 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:17:36.619840 | orchestrator | changed: 
[testbed-node-1] 2026-02-20 04:17:36.619859 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:17:36.619877 | orchestrator | 2026-02-20 04:17:36.619896 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-20 04:17:36.619915 | orchestrator | Friday 20 February 2026 04:17:00 +0000 (0:00:01.878) 0:02:35.498 ******* 2026-02-20 04:17:36.619933 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:17:36.619952 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:17:36.619971 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:17:36.619988 | orchestrator | 2026-02-20 04:17:36.620006 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-20 04:17:36.620024 | orchestrator | Friday 20 February 2026 04:17:01 +0000 (0:00:01.919) 0:02:37.417 ******* 2026-02-20 04:17:36.620043 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:17:36.620061 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:17:36.620079 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:17:36.620097 | orchestrator | 2026-02-20 04:17:36.620115 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-20 04:17:36.620134 | orchestrator | Friday 20 February 2026 04:17:03 +0000 (0:00:01.349) 0:02:38.767 ******* 2026-02-20 04:17:36.620154 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:17:36.620172 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:17:36.620191 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:17:36.620210 | orchestrator | 2026-02-20 04:17:36.620229 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-20 04:17:36.620248 | orchestrator | Friday 20 February 2026 04:17:04 +0000 (0:00:01.287) 0:02:40.054 ******* 2026-02-20 04:17:36.620266 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.620284 | orchestrator | ok: [testbed-node-1] 
2026-02-20 04:17:36.620301 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.620313 | orchestrator | 2026-02-20 04:17:36.620324 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-20 04:17:36.620335 | orchestrator | Friday 20 February 2026 04:17:06 +0000 (0:00:01.694) 0:02:41.748 ******* 2026-02-20 04:17:36.620346 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:17:36.620356 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:17:36.620367 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:17:36.620378 | orchestrator | 2026-02-20 04:17:36.620390 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-20 04:17:36.620402 | orchestrator | Friday 20 February 2026 04:17:07 +0000 (0:00:01.678) 0:02:43.427 ******* 2026-02-20 04:17:36.620414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 04:17:36.620425 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 04:17:36.620451 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 04:17:36.620463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 04:17:36.620473 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-20 04:17:36.620484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 04:17:36.620564 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 04:17:36.620576 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-20 04:17:36.620610 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-20 04:17:36.620622 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 04:17:36.620634 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-20 04:17:36.620645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-20 04:17:36.620656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 04:17:36.620666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 04:17:36.620677 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-20 04:17:36.620688 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 04:17:36.620707 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 04:17:36.620725 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-20 04:17:36.620743 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 04:17:36.620762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-20 04:17:36.620781 | orchestrator | 2026-02-20 04:17:36.620800 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-20 04:17:36.620818 | orchestrator | 2026-02-20 04:17:36.620837 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-20 04:17:36.620852 | orchestrator | Friday 20 February 2026 04:17:12 +0000 (0:00:04.326) 0:02:47.753 ******* 
2026-02-20 04:17:36.620863 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.620874 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:17:36.620884 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.620895 | orchestrator | 2026-02-20 04:17:36.620906 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-20 04:17:36.620917 | orchestrator | Friday 20 February 2026 04:17:13 +0000 (0:00:01.319) 0:02:49.073 ******* 2026-02-20 04:17:36.620928 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.620939 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:17:36.620949 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.620960 | orchestrator | 2026-02-20 04:17:36.620971 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-20 04:17:36.620982 | orchestrator | Friday 20 February 2026 04:17:15 +0000 (0:00:01.675) 0:02:50.749 ******* 2026-02-20 04:17:36.620993 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.621003 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:17:36.621014 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.621025 | orchestrator | 2026-02-20 04:17:36.621036 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-20 04:17:36.621046 | orchestrator | Friday 20 February 2026 04:17:16 +0000 (0:00:01.575) 0:02:52.324 ******* 2026-02-20 04:17:36.621057 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:17:36.621078 | orchestrator | 2026-02-20 04:17:36.621089 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-20 04:17:36.621100 | orchestrator | Friday 20 February 2026 04:17:18 +0000 (0:00:01.652) 0:02:53.976 ******* 2026-02-20 04:17:36.621110 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:17:36.621121 | orchestrator | 
skipping: [testbed-node-4] 2026-02-20 04:17:36.621132 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:17:36.621143 | orchestrator | 2026-02-20 04:17:36.621154 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-20 04:17:36.621164 | orchestrator | Friday 20 February 2026 04:17:19 +0000 (0:00:01.326) 0:02:55.303 ******* 2026-02-20 04:17:36.621175 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:17:36.621186 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:17:36.621197 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:17:36.621208 | orchestrator | 2026-02-20 04:17:36.621218 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-20 04:17:36.621229 | orchestrator | Friday 20 February 2026 04:17:21 +0000 (0:00:01.395) 0:02:56.698 ******* 2026-02-20 04:17:36.621252 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:17:36.621263 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:17:36.621274 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:17:36.621285 | orchestrator | 2026-02-20 04:17:36.621297 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-20 04:17:36.621315 | orchestrator | Friday 20 February 2026 04:17:22 +0000 (0:00:01.368) 0:02:58.067 ******* 2026-02-20 04:17:36.621333 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.621351 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:17:36.621369 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.621387 | orchestrator | 2026-02-20 04:17:36.621404 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-20 04:17:36.621423 | orchestrator | Friday 20 February 2026 04:17:24 +0000 (0:00:01.690) 0:02:59.757 ******* 2026-02-20 04:17:36.621434 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.621445 | orchestrator | ok: [testbed-node-4] 
2026-02-20 04:17:36.621456 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.621467 | orchestrator | 2026-02-20 04:17:36.621478 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-20 04:17:36.621550 | orchestrator | Friday 20 February 2026 04:17:26 +0000 (0:00:02.150) 0:03:01.908 ******* 2026-02-20 04:17:36.621565 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:17:36.621583 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:17:36.621602 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:17:36.621622 | orchestrator | 2026-02-20 04:17:36.621641 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-20 04:17:36.621662 | orchestrator | Friday 20 February 2026 04:17:28 +0000 (0:00:02.294) 0:03:04.202 ******* 2026-02-20 04:17:36.621686 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:18:42.022198 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:18:42.022320 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:18:42.022335 | orchestrator | 2026-02-20 04:18:42.022348 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-20 04:18:42.022361 | orchestrator | 2026-02-20 04:18:42.022373 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-20 04:18:42.022384 | orchestrator | Friday 20 February 2026 04:17:36 +0000 (0:00:07.870) 0:03:12.073 ******* 2026-02-20 04:18:42.022395 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.022407 | orchestrator | 2026-02-20 04:18:42.022418 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-20 04:18:42.022430 | orchestrator | Friday 20 February 2026 04:17:38 +0000 (0:00:02.143) 0:03:14.216 ******* 2026-02-20 04:18:42.022440 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.022451 | orchestrator | 2026-02-20 04:18:42.022462 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-20 04:18:42.022568 | orchestrator | Friday 20 February 2026 04:17:40 +0000 (0:00:01.443) 0:03:15.660 ******* 2026-02-20 04:18:42.022587 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-20 04:18:42.022606 | orchestrator | 2026-02-20 04:18:42.022624 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-20 04:18:42.022660 | orchestrator | Friday 20 February 2026 04:17:41 +0000 (0:00:01.516) 0:03:17.177 ******* 2026-02-20 04:18:42.022681 | orchestrator | changed: [testbed-manager] 2026-02-20 04:18:42.022699 | orchestrator | 2026-02-20 04:18:42.022718 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-20 04:18:42.022737 | orchestrator | Friday 20 February 2026 04:17:43 +0000 (0:00:01.840) 0:03:19.018 ******* 2026-02-20 04:18:42.022756 | orchestrator | changed: [testbed-manager] 2026-02-20 04:18:42.022775 | orchestrator | 2026-02-20 04:18:42.022792 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-20 04:18:42.022811 | orchestrator | Friday 20 February 2026 04:17:45 +0000 (0:00:01.510) 0:03:20.529 ******* 2026-02-20 04:18:42.022832 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-20 04:18:42.022852 | orchestrator | 2026-02-20 04:18:42.022872 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-20 04:18:42.022890 | orchestrator | Friday 20 February 2026 04:17:47 +0000 (0:00:02.856) 0:03:23.385 ******* 2026-02-20 04:18:42.022908 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-20 04:18:42.022927 | orchestrator | 2026-02-20 04:18:42.022947 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-20 04:18:42.022967 | orchestrator | Friday 20 February 
2026 04:17:49 +0000 (0:00:01.785) 0:03:25.171 ******* 2026-02-20 04:18:42.022985 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023003 | orchestrator | 2026-02-20 04:18:42.023055 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-20 04:18:42.023069 | orchestrator | Friday 20 February 2026 04:17:51 +0000 (0:00:01.406) 0:03:26.578 ******* 2026-02-20 04:18:42.023080 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023091 | orchestrator | 2026-02-20 04:18:42.023102 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-20 04:18:42.023113 | orchestrator | 2026-02-20 04:18:42.023124 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-20 04:18:42.023135 | orchestrator | Friday 20 February 2026 04:17:52 +0000 (0:00:01.538) 0:03:28.117 ******* 2026-02-20 04:18:42.023146 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023157 | orchestrator | 2026-02-20 04:18:42.023168 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-20 04:18:42.023179 | orchestrator | Friday 20 February 2026 04:17:53 +0000 (0:00:01.128) 0:03:29.245 ******* 2026-02-20 04:18:42.023190 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 04:18:42.023201 | orchestrator | 2026-02-20 04:18:42.023212 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-20 04:18:42.023223 | orchestrator | Friday 20 February 2026 04:17:55 +0000 (0:00:01.529) 0:03:30.775 ******* 2026-02-20 04:18:42.023234 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023245 | orchestrator | 2026-02-20 04:18:42.023256 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-20 04:18:42.023267 | orchestrator | Friday 20 February 2026 
04:17:57 +0000 (0:00:01.888) 0:03:32.664 ******* 2026-02-20 04:18:42.023278 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023288 | orchestrator | 2026-02-20 04:18:42.023299 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-20 04:18:42.023310 | orchestrator | Friday 20 February 2026 04:17:59 +0000 (0:00:02.645) 0:03:35.309 ******* 2026-02-20 04:18:42.023321 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023332 | orchestrator | 2026-02-20 04:18:42.023343 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-20 04:18:42.023354 | orchestrator | Friday 20 February 2026 04:18:01 +0000 (0:00:01.437) 0:03:36.747 ******* 2026-02-20 04:18:42.023378 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023389 | orchestrator | 2026-02-20 04:18:42.023400 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-20 04:18:42.023411 | orchestrator | Friday 20 February 2026 04:18:02 +0000 (0:00:01.411) 0:03:38.158 ******* 2026-02-20 04:18:42.023422 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023432 | orchestrator | 2026-02-20 04:18:42.023443 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-20 04:18:42.023454 | orchestrator | Friday 20 February 2026 04:18:04 +0000 (0:00:01.571) 0:03:39.730 ******* 2026-02-20 04:18:42.023465 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023476 | orchestrator | 2026-02-20 04:18:42.023518 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-20 04:18:42.023538 | orchestrator | Friday 20 February 2026 04:18:06 +0000 (0:00:02.431) 0:03:42.161 ******* 2026-02-20 04:18:42.023558 | orchestrator | ok: [testbed-manager] 2026-02-20 04:18:42.023576 | orchestrator | 2026-02-20 04:18:42.023594 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-20 04:18:42.023605 | orchestrator | 2026-02-20 04:18:42.023617 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-20 04:18:42.023649 | orchestrator | Friday 20 February 2026 04:18:08 +0000 (0:00:01.634) 0:03:43.795 ******* 2026-02-20 04:18:42.023661 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:18:42.023672 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:18:42.023682 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:18:42.023693 | orchestrator | 2026-02-20 04:18:42.023704 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-20 04:18:42.023715 | orchestrator | Friday 20 February 2026 04:18:09 +0000 (0:00:01.303) 0:03:45.099 ******* 2026-02-20 04:18:42.023726 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:18:42.023737 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:18:42.023748 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:18:42.023759 | orchestrator | 2026-02-20 04:18:42.023769 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-20 04:18:42.023785 | orchestrator | Friday 20 February 2026 04:18:11 +0000 (0:00:01.551) 0:03:46.650 ******* 2026-02-20 04:18:42.023803 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:18:42.023822 | orchestrator | 2026-02-20 04:18:42.023842 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-20 04:18:42.023860 | orchestrator | Friday 20 February 2026 04:18:12 +0000 (0:00:01.723) 0:03:48.374 ******* 2026-02-20 04:18:42.023880 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.023892 | orchestrator | 2026-02-20 04:18:42.023904 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-20 04:18:42.023915 | orchestrator | Friday 20 February 2026 04:18:14 +0000 (0:00:01.838) 0:03:50.213 ******* 2026-02-20 04:18:42.023925 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.023936 | orchestrator | 2026-02-20 04:18:42.023947 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-20 04:18:42.023958 | orchestrator | Friday 20 February 2026 04:18:16 +0000 (0:00:01.842) 0:03:52.056 ******* 2026-02-20 04:18:42.023969 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:18:42.023980 | orchestrator | 2026-02-20 04:18:42.023990 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-20 04:18:42.024001 | orchestrator | Friday 20 February 2026 04:18:17 +0000 (0:00:01.119) 0:03:53.175 ******* 2026-02-20 04:18:42.024012 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024023 | orchestrator | 2026-02-20 04:18:42.024034 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-20 04:18:42.024045 | orchestrator | Friday 20 February 2026 04:18:19 +0000 (0:00:01.981) 0:03:55.156 ******* 2026-02-20 04:18:42.024056 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024076 | orchestrator | 2026-02-20 04:18:42.024087 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-20 04:18:42.024098 | orchestrator | Friday 20 February 2026 04:18:21 +0000 (0:00:02.236) 0:03:57.393 ******* 2026-02-20 04:18:42.024108 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024119 | orchestrator | 2026-02-20 04:18:42.024130 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-20 04:18:42.024141 | orchestrator | Friday 20 February 2026 04:18:23 +0000 (0:00:01.147) 0:03:58.540 ******* 2026-02-20 04:18:42.024152 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-20 04:18:42.024163 | orchestrator | 2026-02-20 04:18:42.024174 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-20 04:18:42.024185 | orchestrator | Friday 20 February 2026 04:18:24 +0000 (0:00:01.147) 0:03:59.688 ******* 2026-02-20 04:18:42.024196 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-20 04:18:42.024207 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-20 04:18:42.024219 | orchestrator | } 2026-02-20 04:18:42.024230 | orchestrator | 2026-02-20 04:18:42.024241 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-20 04:18:42.024252 | orchestrator | Friday 20 February 2026 04:18:25 +0000 (0:00:01.176) 0:04:00.865 ******* 2026-02-20 04:18:42.024263 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:18:42.024273 | orchestrator | 2026-02-20 04:18:42.024284 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-20 04:18:42.024295 | orchestrator | Friday 20 February 2026 04:18:26 +0000 (0:00:01.108) 0:04:01.974 ******* 2026-02-20 04:18:42.024306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-20 04:18:42.024317 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-20 04:18:42.024327 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-20 04:18:42.024338 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-20 04:18:42.024349 | orchestrator | 2026-02-20 04:18:42.024360 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-20 04:18:42.024371 | orchestrator | Friday 20 February 2026 04:18:31 +0000 (0:00:05.327) 0:04:07.302 ******* 2026-02-20 04:18:42.024381 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024392 | orchestrator | 2026-02-20 04:18:42.024417 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-20 04:18:42.024436 | orchestrator | Friday 20 February 2026 04:18:34 +0000 (0:00:02.322) 0:04:09.624 ******* 2026-02-20 04:18:42.024452 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024471 | orchestrator | 2026-02-20 04:18:42.024512 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-20 04:18:42.024533 | orchestrator | Friday 20 February 2026 04:18:36 +0000 (0:00:02.569) 0:04:12.194 ******* 2026-02-20 04:18:42.024552 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-20 04:18:42.024571 | orchestrator | 2026-02-20 04:18:42.024588 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-20 04:18:42.024600 | orchestrator | Friday 20 February 2026 04:18:40 +0000 (0:00:04.164) 0:04:16.359 ******* 2026-02-20 04:18:42.024611 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:18:42.024622 | orchestrator | 2026-02-20 04:18:42.024642 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-20 04:19:11.166295 | orchestrator | Friday 20 February 2026 04:18:42 +0000 (0:00:01.115) 0:04:17.475 ******* 2026-02-20 04:19:11.166391 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-20 04:19:11.166403 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-20 04:19:11.166412 | orchestrator | 2026-02-20 04:19:11.166421 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-20 04:19:11.166448 | orchestrator | Friday 20 February 2026 04:18:44 +0000 (0:00:02.753) 0:04:20.229 ******* 2026-02-20 
04:19:11.166457 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:19:11.166466 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:19:11.166473 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:19:11.166529 | orchestrator | 2026-02-20 04:19:11.166537 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-20 04:19:11.166544 | orchestrator | Friday 20 February 2026 04:18:46 +0000 (0:00:01.356) 0:04:21.585 ******* 2026-02-20 04:19:11.166552 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:19:11.166560 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:19:11.166568 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:19:11.166581 | orchestrator | 2026-02-20 04:19:11.166610 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-20 04:19:11.166628 | orchestrator | 2026-02-20 04:19:11.166641 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-20 04:19:11.166652 | orchestrator | Friday 20 February 2026 04:18:48 +0000 (0:00:02.107) 0:04:23.692 ******* 2026-02-20 04:19:11.166665 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:11.166677 | orchestrator | 2026-02-20 04:19:11.166688 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-20 04:19:11.166700 | orchestrator | Friday 20 February 2026 04:18:49 +0000 (0:00:01.135) 0:04:24.828 ******* 2026-02-20 04:19:11.166712 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-20 04:19:11.166723 | orchestrator | 2026-02-20 04:19:11.166736 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-20 04:19:11.166748 | orchestrator | Friday 20 February 2026 04:18:50 +0000 (0:00:01.509) 0:04:26.337 ******* 2026-02-20 04:19:11.166759 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:11.166772 | 
orchestrator | 2026-02-20 04:19:11.166784 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-20 04:19:11.166797 | orchestrator | 2026-02-20 04:19:11.166810 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-20 04:19:11.166823 | orchestrator | Friday 20 February 2026 04:18:56 +0000 (0:00:05.259) 0:04:31.597 ******* 2026-02-20 04:19:11.166835 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:19:11.166847 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:19:11.166860 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:19:11.166873 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:19:11.166886 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:19:11.166899 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:19:11.166911 | orchestrator | 2026-02-20 04:19:11.166937 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-20 04:19:11.166951 | orchestrator | Friday 20 February 2026 04:18:57 +0000 (0:00:01.832) 0:04:33.430 ******* 2026-02-20 04:19:11.166974 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-20 04:19:11.166989 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-20 04:19:11.167002 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-20 04:19:11.167015 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 04:19:11.167028 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 04:19:11.167040 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-20 04:19:11.167053 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-02-20 04:19:11.167065 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 04:19:11.167078 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-20 04:19:11.167091 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-20 04:19:11.167115 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 04:19:11.167128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-20 04:19:11.167141 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 04:19:11.167152 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-20 04:19:11.167164 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-20 04:19:11.167177 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 04:19:11.167189 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-20 04:19:11.167201 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-20 04:19:11.167214 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 04:19:11.167227 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 04:19:11.167239 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-20 04:19:11.167272 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 04:19:11.167284 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 
04:19:11.167297 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-20 04:19:11.167306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 04:19:11.167314 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 04:19:11.167321 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-20 04:19:11.167328 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 04:19:11.167335 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 04:19:11.167343 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-20 04:19:11.167350 | orchestrator | 2026-02-20 04:19:11.167359 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-20 04:19:11.167367 | orchestrator | Friday 20 February 2026 04:19:06 +0000 (0:00:08.453) 0:04:41.883 ******* 2026-02-20 04:19:11.167380 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:19:11.167397 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:19:11.167411 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:19:11.167424 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:19:11.167436 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:19:11.167447 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:19:11.167459 | orchestrator | 2026-02-20 04:19:11.167469 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-20 04:19:11.167499 | orchestrator | Friday 20 February 2026 04:19:08 +0000 (0:00:01.832) 0:04:43.715 ******* 2026-02-20 04:19:11.167512 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:19:11.167523 | orchestrator | skipping: [testbed-node-4] 
2026-02-20 04:19:11.167534 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:19:11.167546 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:19:11.167558 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:19:11.167569 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:19:11.167582 | orchestrator | 2026-02-20 04:19:11.167595 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:19:11.167608 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 04:19:11.167623 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-20 04:19:11.167652 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-20 04:19:11.167668 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-20 04:19:11.167680 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 04:19:11.167693 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 04:19:11.167705 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-20 04:19:11.167717 | orchestrator | 2026-02-20 04:19:11.167729 | orchestrator | 2026-02-20 04:19:11.167742 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:19:11.167755 | orchestrator | Friday 20 February 2026 04:19:11 +0000 (0:00:02.888) 0:04:46.604 ******* 2026-02-20 04:19:11.167766 | orchestrator | =============================================================================== 2026-02-20 04:19:11.167778 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 34.19s 2026-02-20 
04:19:11.167791 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.43s 2026-02-20 04:19:11.167799 | orchestrator | Manage labels ----------------------------------------------------------- 8.45s 2026-02-20 04:19:11.167807 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.87s 2026-02-20 04:19:11.167814 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.33s 2026-02-20 04:19:11.167821 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.26s 2026-02-20 04:19:11.167829 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.33s 2026-02-20 04:19:11.167837 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.16s 2026-02-20 04:19:11.167844 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.61s 2026-02-20 04:19:11.167852 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 3.16s 2026-02-20 04:19:11.167860 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.16s 2026-02-20 04:19:11.167867 | orchestrator | Manage taints ----------------------------------------------------------- 2.89s 2026-02-20 04:19:11.167883 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.86s 2026-02-20 04:19:11.428757 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.77s 2026-02-20 04:19:11.428858 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.75s 2026-02-20 04:19:11.428874 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.65s 2026-02-20 04:19:11.428886 | orchestrator | k3s_server_post : Copy 
BGP manifests to first master -------------------- 2.57s 2026-02-20 04:19:11.428898 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.52s 2026-02-20 04:19:11.428909 | orchestrator | kubectl : Install required packages ------------------------------------- 2.43s 2026-02-20 04:19:11.428920 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.40s 2026-02-20 04:19:11.623446 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-20 04:19:11.623609 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-20 04:19:11.630628 | orchestrator | + set -e 2026-02-20 04:19:11.630711 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 04:19:11.630725 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 04:19:11.630737 | orchestrator | ++ INTERACTIVE=false 2026-02-20 04:19:11.630792 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 04:19:11.630809 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 04:19:11.630821 | orchestrator | + osism apply openstackclient 2026-02-20 04:19:23.556567 | orchestrator | 2026-02-20 04:19:23 | INFO  | Task 68b8e65e-dd6d-4644-94cd-9afd9e79550a (openstackclient) was prepared for execution. 2026-02-20 04:19:23.556678 | orchestrator | 2026-02-20 04:19:23 | INFO  | It takes a moment until task 68b8e65e-dd6d-4644-94cd-9afd9e79550a (openstackclient) has been started and output is visible here. 
2026-02-20 04:19:58.197693 | orchestrator | 2026-02-20 04:19:58.197822 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-20 04:19:58.197841 | orchestrator | 2026-02-20 04:19:58.197854 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-20 04:19:58.197866 | orchestrator | Friday 20 February 2026 04:19:31 +0000 (0:00:03.066) 0:00:03.066 ******* 2026-02-20 04:19:58.197878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-20 04:19:58.197890 | orchestrator | 2026-02-20 04:19:58.197902 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-20 04:19:58.197913 | orchestrator | Friday 20 February 2026 04:19:32 +0000 (0:00:01.784) 0:00:04.850 ******* 2026-02-20 04:19:58.197924 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-20 04:19:58.197938 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-20 04:19:58.197950 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-20 04:19:58.197961 | orchestrator | 2026-02-20 04:19:58.197972 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-20 04:19:58.197984 | orchestrator | Friday 20 February 2026 04:19:34 +0000 (0:00:01.939) 0:00:06.790 ******* 2026-02-20 04:19:58.197995 | orchestrator | changed: [testbed-manager] 2026-02-20 04:19:58.198007 | orchestrator | 2026-02-20 04:19:58.198076 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-20 04:19:58.198091 | orchestrator | Friday 20 February 2026 04:19:36 +0000 (0:00:02.021) 0:00:08.811 ******* 2026-02-20 04:19:58.198102 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:58.198114 | 
orchestrator | 2026-02-20 04:19:58.198126 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-20 04:19:58.198145 | orchestrator | Friday 20 February 2026 04:19:38 +0000 (0:00:01.873) 0:00:10.685 ******* 2026-02-20 04:19:58.198168 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:58.198194 | orchestrator | 2026-02-20 04:19:58.198212 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-20 04:19:58.198230 | orchestrator | Friday 20 February 2026 04:19:40 +0000 (0:00:01.758) 0:00:12.443 ******* 2026-02-20 04:19:58.198249 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:58.198267 | orchestrator | 2026-02-20 04:19:58.198287 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-20 04:19:58.198305 | orchestrator | Friday 20 February 2026 04:19:41 +0000 (0:00:01.480) 0:00:13.924 ******* 2026-02-20 04:19:58.198325 | orchestrator | changed: [testbed-manager] 2026-02-20 04:19:58.198343 | orchestrator | 2026-02-20 04:19:58.198356 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-20 04:19:58.198369 | orchestrator | Friday 20 February 2026 04:19:52 +0000 (0:00:10.477) 0:00:24.401 ******* 2026-02-20 04:19:58.198382 | orchestrator | changed: [testbed-manager] 2026-02-20 04:19:58.198395 | orchestrator | 2026-02-20 04:19:58.198408 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-20 04:19:58.198420 | orchestrator | Friday 20 February 2026 04:19:54 +0000 (0:00:01.981) 0:00:26.383 ******* 2026-02-20 04:19:58.198433 | orchestrator | changed: [testbed-manager] 2026-02-20 04:19:58.198447 | orchestrator | 2026-02-20 04:19:58.198460 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-20 04:19:58.198473 | orchestrator | Friday 20 February 
2026 04:19:56 +0000 (0:00:01.636) 0:00:28.019 ******* 2026-02-20 04:19:58.198541 | orchestrator | ok: [testbed-manager] 2026-02-20 04:19:58.198554 | orchestrator | 2026-02-20 04:19:58.198567 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:19:58.198580 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-20 04:19:58.198594 | orchestrator | 2026-02-20 04:19:58.198605 | orchestrator | 2026-02-20 04:19:58.198616 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:19:58.198627 | orchestrator | Friday 20 February 2026 04:19:57 +0000 (0:00:01.792) 0:00:29.812 ******* 2026-02-20 04:19:58.198638 | orchestrator | =============================================================================== 2026-02-20 04:19:58.198649 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.48s 2026-02-20 04:19:58.198660 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.02s 2026-02-20 04:19:58.198670 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.98s 2026-02-20 04:19:58.198681 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.94s 2026-02-20 04:19:58.198692 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.87s 2026-02-20 04:19:58.198703 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.79s 2026-02-20 04:19:58.198713 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.78s 2026-02-20 04:19:58.198724 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.76s 2026-02-20 04:19:58.198735 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.64s 
2026-02-20 04:19:58.198746 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.48s 2026-02-20 04:19:58.490675 | orchestrator | + osism apply -a upgrade common 2026-02-20 04:20:00.530567 | orchestrator | 2026-02-20 04:20:00 | INFO  | Task a881e01b-ad9b-448b-9f59-91b2b9c6c0ae (common) was prepared for execution. 2026-02-20 04:20:00.530659 | orchestrator | 2026-02-20 04:20:00 | INFO  | It takes a moment until task a881e01b-ad9b-448b-9f59-91b2b9c6c0ae (common) has been started and output is visible here. 2026-02-20 04:20:17.738937 | orchestrator | 2026-02-20 04:20:17.739048 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-20 04:20:17.739063 | orchestrator | 2026-02-20 04:20:17.739074 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 04:20:17.739084 | orchestrator | Friday 20 February 2026 04:20:06 +0000 (0:00:02.017) 0:00:02.017 ******* 2026-02-20 04:20:17.739095 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:20:17.739105 | orchestrator | 2026-02-20 04:20:17.739116 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-20 04:20:17.739126 | orchestrator | Friday 20 February 2026 04:20:09 +0000 (0:00:02.938) 0:00:04.956 ******* 2026-02-20 04:20:17.739137 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739147 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739157 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739167 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739177 | orchestrator | ok: 
[testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739187 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739197 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739207 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739239 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:20:17.739250 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739260 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739270 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739279 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739289 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739299 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739309 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:20:17.739318 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739328 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739338 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739347 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:20:17.739357 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 
04:20:17.739367 | orchestrator | 2026-02-20 04:20:17.739376 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 04:20:17.739386 | orchestrator | Friday 20 February 2026 04:20:12 +0000 (0:00:03.498) 0:00:08.454 ******* 2026-02-20 04:20:17.739396 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:20:17.739433 | orchestrator | 2026-02-20 04:20:17.739443 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-20 04:20:17.739453 | orchestrator | Friday 20 February 2026 04:20:15 +0000 (0:00:02.584) 0:00:11.039 ******* 2026-02-20 04:20:17.739470 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739580 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739680 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739699 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739882 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:17.739898 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:17.739910 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:17.739937 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.937956 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-20 04:20:20.938141 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938163 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938182 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938195 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938204 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938233 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938255 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938274 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938297 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:20.938311 | orchestrator | 2026-02-20 04:20:20.938326 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-20 04:20:20.938342 | orchestrator | Friday 20 February 2026 04:20:19 +0000 (0:00:04.592) 0:00:15.632 ******* 2026-02-20 04:20:20.938368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:20.938384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:20.938399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:20.938414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:20.938463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:23.157411 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:20:23.157441 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:23.157667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157769 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:20:23.157788 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:20:23.157810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:23.157862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:23.157887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157912 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:20:23.157935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.157990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.158013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:23.158138 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:20:23.158163 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:20:23.158185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:23.158226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.657772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.657868 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:20:24.657880 | orchestrator | 2026-02-20 04:20:24.657917 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-20 04:20:24.657926 | orchestrator | Friday 20 February 2026 04:20:23 +0000 (0:00:03.326) 0:00:18.959 ******* 2026-02-20 04:20:24.657934 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:24.657945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:24.657965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.657973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.657996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658003 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:20:24.658010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:24.658099 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:24.658117 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:20:24.658124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:24.658158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:24.658174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-20 04:20:38.221619 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:20:38.221698 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:20:38.221705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:38.221712 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:20:38.221717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:38.221734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:38.221739 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:20:38.221758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:38.221762 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:20:38.221766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:38.221771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:20:38.221776 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:20:38.221782 | orchestrator | 2026-02-20 04:20:38.221789 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-20 04:20:38.221797 | orchestrator | Friday 20 February 2026 04:20:26 +0000 (0:00:03.543) 0:00:22.503 ******* 2026-02-20 04:20:38.221803 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:20:38.221808 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:20:38.221831 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:20:38.221839 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:20:38.221845 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:20:38.221851 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:20:38.221857 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:20:38.221863 | orchestrator | 2026-02-20 04:20:38.221869 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-20 04:20:38.221875 | orchestrator | Friday 20 February 2026 04:20:28 +0000 (0:00:02.054) 0:00:24.557 ******* 2026-02-20 04:20:38.221881 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:20:38.221887 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:20:38.221892 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:20:38.221898 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:20:38.221903 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:20:38.221909 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:20:38.221914 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:20:38.221919 | orchestrator | 2026-02-20 04:20:38.221924 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-20 
04:20:38.221931 | orchestrator | Friday 20 February 2026 04:20:30 +0000 (0:00:02.066) 0:00:26.624 ******* 2026-02-20 04:20:38.221937 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:20:38.221943 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:20:38.221948 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:20:38.221954 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:20:38.221967 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:20:38.221973 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:20:38.221980 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:20:38.221986 | orchestrator | 2026-02-20 04:20:38.221992 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-20 04:20:38.221998 | orchestrator | Friday 20 February 2026 04:20:32 +0000 (0:00:01.806) 0:00:28.431 ******* 2026-02-20 04:20:38.222004 | orchestrator | changed: [testbed-manager] 2026-02-20 04:20:38.222010 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:20:38.222054 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:20:38.222058 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:20:38.222062 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:20:38.222066 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:20:38.222069 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:20:38.222073 | orchestrator | 2026-02-20 04:20:38.222077 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-20 04:20:38.222081 | orchestrator | Friday 20 February 2026 04:20:35 +0000 (0:00:02.782) 0:00:31.213 ******* 2026-02-20 04:20:38.222090 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:38.222095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:38.222099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:38.222123 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:38.222139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:40.154131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:40.154202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:40.154218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:20:40.154224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154230 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154298 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:20:40.154313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:01.363907 | orchestrator | 2026-02-20 04:21:01.364054 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-20 04:21:01.364080 | orchestrator | Friday 20 February 2026 04:20:40 +0000 (0:00:04.756) 0:00:35.970 ******* 2026-02-20 
04:21:01.364099 | orchestrator | [WARNING]: Skipped 2026-02-20 04:21:01.364118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-20 04:21:01.364139 | orchestrator | to this access issue: 2026-02-20 04:21:01.364158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-20 04:21:01.364177 | orchestrator | directory 2026-02-20 04:21:01.364196 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:21:01.364217 | orchestrator | 2026-02-20 04:21:01.364235 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-20 04:21:01.364253 | orchestrator | Friday 20 February 2026 04:20:42 +0000 (0:00:02.180) 0:00:38.150 ******* 2026-02-20 04:21:01.364272 | orchestrator | [WARNING]: Skipped 2026-02-20 04:21:01.364290 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-20 04:21:01.364311 | orchestrator | to this access issue: 2026-02-20 04:21:01.364331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-20 04:21:01.364352 | orchestrator | directory 2026-02-20 04:21:01.364372 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:21:01.364392 | orchestrator | 2026-02-20 04:21:01.364413 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-20 04:21:01.364436 | orchestrator | Friday 20 February 2026 04:20:44 +0000 (0:00:01.697) 0:00:39.848 ******* 2026-02-20 04:21:01.364522 | orchestrator | [WARNING]: Skipped 2026-02-20 04:21:01.364548 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-20 04:21:01.364568 | orchestrator | to this access issue: 2026-02-20 04:21:01.364587 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-20 04:21:01.364609 | orchestrator | 
directory 2026-02-20 04:21:01.364630 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:21:01.364651 | orchestrator | 2026-02-20 04:21:01.364672 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-20 04:21:01.364692 | orchestrator | Friday 20 February 2026 04:20:45 +0000 (0:00:01.731) 0:00:41.580 ******* 2026-02-20 04:21:01.364713 | orchestrator | [WARNING]: Skipped 2026-02-20 04:21:01.364733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-20 04:21:01.364753 | orchestrator | to this access issue: 2026-02-20 04:21:01.364774 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-20 04:21:01.364794 | orchestrator | directory 2026-02-20 04:21:01.364814 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:21:01.364836 | orchestrator | 2026-02-20 04:21:01.364857 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-20 04:21:01.364878 | orchestrator | Friday 20 February 2026 04:20:47 +0000 (0:00:01.813) 0:00:43.394 ******* 2026-02-20 04:21:01.364898 | orchestrator | changed: [testbed-manager] 2026-02-20 04:21:01.364918 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:21:01.364938 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:21:01.364991 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:21:01.365012 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:21:01.365059 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:21:01.365094 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:21:01.365116 | orchestrator | 2026-02-20 04:21:01.365137 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-20 04:21:01.365159 | orchestrator | Friday 20 February 2026 04:20:51 +0000 (0:00:04.135) 0:00:47.529 ******* 2026-02-20 04:21:01.365181 | orchestrator | ok: 
[testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365217 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365239 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365273 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365292 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365311 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365330 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:21:01.365348 | orchestrator | 2026-02-20 04:21:01.365367 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-20 04:21:01.365386 | orchestrator | Friday 20 February 2026 04:20:54 +0000 (0:00:03.192) 0:00:50.722 ******* 2026-02-20 04:21:01.365405 | orchestrator | ok: [testbed-manager] 2026-02-20 04:21:01.365423 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:21:01.365440 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:21:01.365457 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:21:01.365532 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:21:01.365552 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:21:01.365569 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:21:01.365586 | orchestrator | 2026-02-20 04:21:01.365603 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-20 04:21:01.365621 | orchestrator | Friday 20 February 2026 04:20:58 +0000 (0:00:03.130) 0:00:53.852 ******* 2026-02-20 04:21:01.365669 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:01.365693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:01.365723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:01.365757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:01.365777 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:01.365798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:01.365816 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:01.365835 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:01.365866 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:10.102843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:10.102955 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:10.102993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:10.103010 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:10.103025 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:10.103038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:10.103052 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:10.103084 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:10.103099 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:10.103127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:10.103140 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:10.103151 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:10.103159 | orchestrator | 2026-02-20 04:21:10.103168 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************ 2026-02-20 04:21:10.103177 | orchestrator | Friday 20 February 2026 04:21:01 +0000 (0:00:03.316) 0:00:57.168 ******* 2026-02-20 04:21:10.103185 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103193 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103200 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103207 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103215 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103222 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103229 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:21:10.103236 | orchestrator | 2026-02-20 04:21:10.103244 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-20 04:21:10.103251 | orchestrator | Friday 20 February 2026 04:21:04 +0000 (0:00:03.066) 0:01:00.235 ******* 2026-02-20 04:21:10.103258 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103266 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103273 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103280 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103288 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103295 | orchestrator | ok: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103303 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:21:10.103310 | orchestrator | 2026-02-20 04:21:10.103317 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-20 04:21:10.103331 | orchestrator | Friday 20 February 2026 04:21:07 +0000 (0:00:03.119) 0:01:03.354 ******* 2026-02-20 04:21:10.103355 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:12.034586 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034689 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:12.034741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:14.758459 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:14.758644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:14.758661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:14.758672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:14.758683 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:14.758694 | orchestrator |
2026-02-20 04:21:14.758706 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-02-20 04:21:14.758718 | orchestrator | Friday 20 February 2026 04:21:12 +0000 (0:00:04.488) 0:01:07.843 *******
2026-02-20 04:21:14.758731 | orchestrator | changed: [testbed-manager] => {
2026-02-20 04:21:14.758763 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758776 | orchestrator | }
2026-02-20 04:21:14.758787 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:21:14.758797 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758807 | orchestrator | }
2026-02-20 04:21:14.758817 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:21:14.758827 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758838 | orchestrator | }
2026-02-20 04:21:14.758849 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:21:14.758860 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758871 | orchestrator | }
2026-02-20 04:21:14.758881 | orchestrator | changed: [testbed-node-3] => {
2026-02-20 04:21:14.758891 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758902 | orchestrator | }
2026-02-20 04:21:14.758912 | orchestrator | changed: [testbed-node-4] => {
2026-02-20 04:21:14.758922 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758932 | orchestrator | }
2026-02-20 04:21:14.758943 | orchestrator | changed: [testbed-node-5] => {
2026-02-20 04:21:14.758953 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:21:14.758963 | orchestrator | }
2026-02-20 04:21:14.758973 | orchestrator |
2026-02-20 04:21:14.758984 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-20 04:21:14.758995 | orchestrator | Friday 20 February 2026 04:21:14 +0000 (0:00:02.043) 0:01:09.887 *******
2026-02-20 04:21:14.759009 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:14.759051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:14.759064 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:14.759075 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:21:14.759088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:14.759099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:14.759117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:14.759129 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:21:14.759140 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:14.759153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:14.759165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:14.759178 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:21:14.759196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:23.439823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.439947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.439983 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:21:23.439996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:23.440007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.440016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.440025 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:21:23.440043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:23.440058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.440085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:21:23.440094 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:21:23.440123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:21:23.440140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:23.440150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:23.440159 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:23.440168 | orchestrator |
2026-02-20 04:21:23.440178 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440188 | orchestrator | Friday 20 February 2026 04:21:17 +0000 (0:00:02.963) 0:01:12.851 *******
2026-02-20 04:21:23.440197 | orchestrator |
2026-02-20 04:21:23.440207 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440216 | orchestrator | Friday 20 February 2026 04:21:17 +0000 (0:00:00.423) 0:01:13.274 *******
2026-02-20 04:21:23.440224 | orchestrator |
2026-02-20 04:21:23.440233 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440241 | orchestrator | Friday 20 February 2026 04:21:17 +0000 (0:00:00.471) 0:01:13.746 *******
2026-02-20 04:21:23.440250 | orchestrator |
2026-02-20 04:21:23.440259 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440267 | orchestrator | Friday 20 February 2026 04:21:18 +0000 (0:00:00.462) 0:01:14.209 *******
2026-02-20 04:21:23.440276 | orchestrator |
2026-02-20 04:21:23.440284 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440293 | orchestrator | Friday 20 February 2026 04:21:18 +0000 (0:00:00.436) 0:01:14.645 *******
2026-02-20 04:21:23.440301 | orchestrator |
2026-02-20 04:21:23.440312 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440322 | orchestrator | Friday 20 February 2026 04:21:19 +0000 (0:00:00.653) 0:01:15.299 *******
2026-02-20 04:21:23.440332 | orchestrator |
2026-02-20 04:21:23.440342 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-20 04:21:23.440352 | orchestrator | Friday 20 February 2026 04:21:19 +0000 (0:00:00.446) 0:01:15.746 *******
2026-02-20 04:21:23.440362 | orchestrator |
2026-02-20 04:21:23.440372 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-20 04:21:23.440382 | orchestrator | Friday 20 February 2026 04:21:20 +0000 (0:00:00.896) 0:01:16.642 *******
2026-02-20 04:21:23.440412 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wud2pfy5/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wud2pfy5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wud2pfy5/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:26.694885 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fwk8szpf/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fwk8szpf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_fwk8szpf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:26.695034 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_pnrizbcu/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_pnrizbcu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_pnrizbcu/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:26.695087 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wngojj_7/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wngojj_7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_wngojj_7/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:26.695121 | orchestrator | fatal: [testbed-node-4]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_waq2avto/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_waq2avto/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_waq2avto/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:27.194628 | orchestrator | 2026-02-20 04:21:27 | INFO  | Task e67e3051-1214-4a2b-9ae7-ab401bc8a282 (common) was prepared for execution. 2026-02-20 04:21:27.194748 | orchestrator | 2026-02-20 04:21:27 | INFO  | It takes a moment until task e67e3051-1214-4a2b-9ae7-ab401bc8a282 (common) has been started and output is visible here. 2026-02-20 04:21:33.086372 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_04ogtg9h/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_04ogtg9h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_04ogtg9h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:33.086614 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_bf3qldqs/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_bf3qldqs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_bf3qldqs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-20 04:21:33.086648 | orchestrator | 2026-02-20 04:21:33.086669 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:21:33.086730 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086756 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086796 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086816 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086835 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086852 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 
04:21:33.086869 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-20 04:21:33.086886 | orchestrator | 2026-02-20 04:21:33.086905 | orchestrator | 2026-02-20 04:21:33.086924 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:21:33.086944 | orchestrator | Friday 20 February 2026 04:21:26 +0000 (0:00:05.869) 0:01:22.512 ******* 2026-02-20 04:21:33.086961 | orchestrator | =============================================================================== 2026-02-20 04:21:33.086980 | orchestrator | common : Restart fluentd container -------------------------------------- 5.87s 2026-02-20 04:21:33.086999 | orchestrator | common : Copying over config.json files for services -------------------- 4.76s 2026-02-20 04:21:33.087018 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.59s 2026-02-20 04:21:33.087037 | orchestrator | service-check-containers : common | Check containers -------------------- 4.49s 2026-02-20 04:21:33.087056 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.14s 2026-02-20 04:21:33.087076 | orchestrator | common : Flush handlers ------------------------------------------------- 3.79s 2026-02-20 04:21:33.087107 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.54s 2026-02-20 04:21:33.087128 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.50s 2026-02-20 04:21:33.087149 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.33s 2026-02-20 04:21:33.087171 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.32s 2026-02-20 04:21:33.087190 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.19s 2026-02-20 04:21:33.087208 | orchestrator | common : 
Ensure RabbitMQ Erlang cookie exists --------------------------- 3.13s 2026-02-20 04:21:33.087222 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.12s 2026-02-20 04:21:33.087233 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.07s 2026-02-20 04:21:33.087244 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.96s 2026-02-20 04:21:33.087255 | orchestrator | common : include_tasks -------------------------------------------------- 2.94s 2026-02-20 04:21:33.087266 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.78s 2026-02-20 04:21:33.087276 | orchestrator | common : include_tasks -------------------------------------------------- 2.58s 2026-02-20 04:21:33.087287 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.18s 2026-02-20 04:21:33.087298 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.07s 2026-02-20 04:21:33.087309 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-20 04:21:33.087321 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-20 04:21:33.087342 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-20 04:21:33.087364 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-20 04:21:33.087386 | orchestrator | 2026-02-20 04:21:33.087416 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-20 04:21:42.180080 | orchestrator | 2026-02-20 04:21:42.180197 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 04:21:42.180214 | orchestrator | Friday 20 February 2026 04:21:33 +0000 (0:00:01.571) 0:00:01.571 ******* 2026-02-20 04:21:42.180228 | orchestrator | included: 
/ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:21:42.180240 | orchestrator | 2026-02-20 04:21:42.180252 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-20 04:21:42.180263 | orchestrator | Friday 20 February 2026 04:21:35 +0000 (0:00:02.178) 0:00:03.749 ******* 2026-02-20 04:21:42.180275 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180286 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180298 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180308 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180319 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180346 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180359 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180370 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180381 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-20 04:21:42.180392 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180403 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180415 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180426 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180437 | orchestrator | ok: [testbed-node-4] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180448 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-20 04:21:42.180459 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180470 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180573 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180584 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180595 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180606 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-20 04:21:42.180617 | orchestrator | 2026-02-20 04:21:42.180628 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-20 04:21:42.180639 | orchestrator | Friday 20 February 2026 04:21:37 +0000 (0:00:02.268) 0:00:06.017 ******* 2026-02-20 04:21:42.180650 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:21:42.180663 | orchestrator | 2026-02-20 04:21:42.180674 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-20 04:21:42.180709 | orchestrator | Friday 20 February 2026 04:21:39 +0000 (0:00:02.025) 0:00:08.043 ******* 2026-02-20 04:21:42.180724 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180772 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180785 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-20 04:21:42.180803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180815 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180826 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:21:42.180838 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:42.180858 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:42.180878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899548 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899723 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899738 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899776 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899788 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899832 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899861 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899872 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:43.899884 | orchestrator | 2026-02-20 04:21:43.899897 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-20 04:21:43.899909 | orchestrator | Friday 20 February 2026 04:21:43 +0000 (0:00:03.624) 0:00:11.668 
*******
2026-02-20 04:21:43.899922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:43.899952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:43.899967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:43.899979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:43.900001 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680825 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:21:44.680844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:44.680917 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:21:44.680929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:44.680952 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:21:44.680983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.680995 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:21:44.681007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.681026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.681038 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:21:44.681049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:44.681100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.681113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:44.681125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:44.681139 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:21:44.681160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.799898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800022 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:45.800040 | orchestrator |
2026-02-20 04:21:45.800053 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-20 04:21:45.800065 | orchestrator | Friday 20 February 2026 04:21:44 +0000 (0:00:01.488) 0:00:13.156 *******
2026-02-20 04:21:45.800078 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:45.800092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:45.800104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:45.800162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800226 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:21:45.800238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:45.800250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:45.800285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:45.800310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269724 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:21:54.269812 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:21:54.269819 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:21:54.269826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269832 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:21:54.269837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.269843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.269855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269862 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:21:54.269868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:54.269902 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:54.269908 | orchestrator |
2026-02-20 04:21:54.269916 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-20 04:21:54.269935 | orchestrator | Friday 20 February 2026 04:21:46 +0000 (0:00:02.195) 0:00:15.352 *******
2026-02-20 04:21:54.269955 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:21:54.269963 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:21:54.269968 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:21:54.269972 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:21:54.269976 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:21:54.269979 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:21:54.269983 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:54.269987 | orchestrator |
2026-02-20 04:21:54.269991 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-20 04:21:54.269995 | orchestrator | Friday 20 February 2026 04:21:47 +0000 (0:00:01.064) 0:00:16.417 *******
2026-02-20 04:21:54.269999 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:21:54.270003 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:21:54.270006 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:21:54.270010 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:21:54.270051 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:21:54.270055 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:21:54.270059 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:54.270063 | orchestrator |
2026-02-20 04:21:54.270067 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-20 04:21:54.270071 | orchestrator | Friday 20 February 2026 04:21:48 +0000 (0:00:00.971) 0:00:17.388 *******
2026-02-20 04:21:54.270074 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:21:54.270078 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:21:54.270082 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:21:54.270086 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:21:54.270089 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:21:54.270093 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:21:54.270097 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:21:54.270101 | orchestrator |
2026-02-20 04:21:54.270105 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-20 04:21:54.270109 | orchestrator | Friday 20 February 2026 04:21:49 +0000 (0:00:00.726) 0:00:18.115 *******
2026-02-20 04:21:54.270113 | orchestrator | ok: [testbed-manager]
2026-02-20 04:21:54.270118 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:21:54.270122 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:21:54.270125 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:21:54.270129 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:21:54.270133 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:21:54.270137 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:21:54.270141 | orchestrator |
2026-02-20 04:21:54.270144 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-20 04:21:54.270148 | orchestrator | Friday 20 February 2026 04:21:51 +0000 (0:00:01.883) 0:00:19.999 *******
2026-02-20 04:21:54.270153 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.270163 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.270168 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.270172 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:54.270185 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:55.261816 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.261911 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:55.261925 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.261955 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-20 04:21:55.261965 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.261975 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262001 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262109 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262145 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262204 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:21:55.262219 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions':
{}}}) 2026-02-20 04:21:55.262234 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:21:55.262261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:08.842347 | orchestrator | 2026-02-20 04:22:08.842544 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-20 04:22:08.842568 | orchestrator | Friday 20 February 2026 04:21:55 +0000 (0:00:03.742) 0:00:23.741 ******* 2026-02-20 04:22:08.842581 | orchestrator | [WARNING]: Skipped 2026-02-20 04:22:08.842594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-20 04:22:08.842610 | orchestrator | to this access issue: 2026-02-20 04:22:08.842629 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-20 04:22:08.842647 | orchestrator | directory 2026-02-20 04:22:08.842667 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:22:08.842689 | orchestrator | 2026-02-20 04:22:08.842708 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-20 04:22:08.842725 | orchestrator | Friday 20 February 2026 04:21:56 +0000 
(0:00:01.281) 0:00:25.022 ******* 2026-02-20 04:22:08.842764 | orchestrator | [WARNING]: Skipped 2026-02-20 04:22:08.842776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-20 04:22:08.842786 | orchestrator | to this access issue: 2026-02-20 04:22:08.842798 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-20 04:22:08.842809 | orchestrator | directory 2026-02-20 04:22:08.842820 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:22:08.842831 | orchestrator | 2026-02-20 04:22:08.842842 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-20 04:22:08.842853 | orchestrator | Friday 20 February 2026 04:21:57 +0000 (0:00:00.907) 0:00:25.930 ******* 2026-02-20 04:22:08.842866 | orchestrator | [WARNING]: Skipped 2026-02-20 04:22:08.842879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-20 04:22:08.842893 | orchestrator | to this access issue: 2026-02-20 04:22:08.842906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-20 04:22:08.842919 | orchestrator | directory 2026-02-20 04:22:08.842947 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:22:08.842959 | orchestrator | 2026-02-20 04:22:08.842970 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-20 04:22:08.842995 | orchestrator | Friday 20 February 2026 04:21:58 +0000 (0:00:00.920) 0:00:26.851 ******* 2026-02-20 04:22:08.843006 | orchestrator | [WARNING]: Skipped 2026-02-20 04:22:08.843017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-20 04:22:08.843028 | orchestrator | to this access issue: 2026-02-20 04:22:08.843039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 
2026-02-20 04:22:08.843050 | orchestrator | directory 2026-02-20 04:22:08.843061 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-20 04:22:08.843072 | orchestrator | 2026-02-20 04:22:08.843083 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-20 04:22:08.843094 | orchestrator | Friday 20 February 2026 04:21:59 +0000 (0:00:00.872) 0:00:27.723 ******* 2026-02-20 04:22:08.843105 | orchestrator | ok: [testbed-manager] 2026-02-20 04:22:08.843116 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:22:08.843128 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:22:08.843139 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:22:08.843150 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:22:08.843161 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:22:08.843171 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:22:08.843182 | orchestrator | 2026-02-20 04:22:08.843194 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-20 04:22:08.843207 | orchestrator | Friday 20 February 2026 04:22:02 +0000 (0:00:03.324) 0:00:31.047 ******* 2026-02-20 04:22:08.843226 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843246 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843267 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843287 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843299 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843310 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 
04:22:08.843321 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-20 04:22:08.843332 | orchestrator | 2026-02-20 04:22:08.843343 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-20 04:22:08.843354 | orchestrator | Friday 20 February 2026 04:22:05 +0000 (0:00:02.501) 0:00:33.549 ******* 2026-02-20 04:22:08.843365 | orchestrator | ok: [testbed-manager] 2026-02-20 04:22:08.843385 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:22:08.843397 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:22:08.843408 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:22:08.843425 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:22:08.843436 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:22:08.843447 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:22:08.843458 | orchestrator | 2026-02-20 04:22:08.843469 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-20 04:22:08.843517 | orchestrator | Friday 20 February 2026 04:22:06 +0000 (0:00:01.835) 0:00:35.384 ******* 2026-02-20 04:22:08.843559 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:08.843575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:08.843588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:08.843600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:08.843612 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:08.843625 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:08.843645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:08.843663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:08.843685 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:15.609042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:15.609151 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:15.609167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:15.609185 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:15.609232 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:15.609270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:15.609290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:15.609331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:15.609358 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:15.609380 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:15.609391 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:15.609401 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:15.609420 | orchestrator | 2026-02-20 04:22:15.609431 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-20 04:22:15.609443 | orchestrator | Friday 20 February 2026 04:22:08 +0000 (0:00:01.933) 0:00:37.318 ******* 2026-02-20 04:22:15.609453 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609463 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609608 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609629 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609645 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609661 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609688 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-20 04:22:15.609705 | orchestrator | 2026-02-20 04:22:15.609723 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-20 04:22:15.609741 | orchestrator | Friday 20 February 2026 04:22:10 +0000 (0:00:02.089) 0:00:39.408 ******* 2026-02-20 04:22:15.609758 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609776 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609794 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609811 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609828 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609845 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609864 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-20 04:22:15.609881 | orchestrator | 2026-02-20 04:22:15.609898 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-20 04:22:15.609910 | orchestrator | Friday 20 February 2026 04:22:13 +0000 (0:00:02.181) 0:00:41.589 ******* 2026-02-20 04:22:15.609935 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.734826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.734933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.734982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.734999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.735028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.735043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-20 04:22:16.735057 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735132 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735147 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:16.735257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:18.416261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:18.416367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:18.416377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:18.416384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:22:18.416401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 
04:22:18.416408 | orchestrator | 2026-02-20 04:22:18.416416 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-20 04:22:18.416424 | orchestrator | Friday 20 February 2026 04:22:16 +0000 (0:00:03.624) 0:00:45.214 ******* 2026-02-20 04:22:18.416431 | orchestrator | changed: [testbed-manager] => { 2026-02-20 04:22:18.416439 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416445 | orchestrator | } 2026-02-20 04:22:18.416452 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:22:18.416458 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416465 | orchestrator | } 2026-02-20 04:22:18.416515 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:22:18.416523 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416529 | orchestrator | } 2026-02-20 04:22:18.416536 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:22:18.416542 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416548 | orchestrator | } 2026-02-20 04:22:18.416554 | orchestrator | changed: [testbed-node-3] => { 2026-02-20 04:22:18.416560 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416567 | orchestrator | } 2026-02-20 04:22:18.416573 | orchestrator | changed: [testbed-node-4] => { 2026-02-20 04:22:18.416579 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416586 | orchestrator | } 2026-02-20 04:22:18.416597 | orchestrator | changed: [testbed-node-5] => { 2026-02-20 04:22:18.416607 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:22:18.416618 | orchestrator | } 2026-02-20 04:22:18.416628 | orchestrator | 2026-02-20 04:22:18.416638 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:22:18.416650 | orchestrator | Friday 20 February 2026 04:22:17 +0000 (0:00:01.073) 0:00:46.288 ******* 2026-02-20 04:22:18.416673 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:18.416700 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416708 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416715 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:22:18.416721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:18.416728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416742 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:22:18.416748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:18.416766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:18.416784 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:22:20.744896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:20.745012 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745047 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:22:20.745096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:20.745118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745188 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-20 04:22:20.745210 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-20 04:22:20.745250 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:22:20.745287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:20.745301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745324 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:22:20.745336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-20 04:22:20.745354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:22:20.745388 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:22:20.745401 | orchestrator | 2026-02-20 04:22:20.745415 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745428 | orchestrator | Friday 20 February 2026 04:22:19 +0000 (0:00:02.111) 0:00:48.399 ******* 2026-02-20 04:22:20.745441 | orchestrator | 2026-02-20 04:22:20.745457 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745541 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.096) 0:00:48.496 ******* 2026-02-20 04:22:20.745562 | orchestrator | 2026-02-20 04:22:20.745582 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745601 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.082) 0:00:48.578 ******* 2026-02-20 04:22:20.745620 | orchestrator | 2026-02-20 04:22:20.745635 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745646 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.071) 0:00:48.650 ******* 2026-02-20 
04:22:20.745657 | orchestrator | 2026-02-20 04:22:20.745668 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745679 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.070) 0:00:48.720 ******* 2026-02-20 04:22:20.745690 | orchestrator | 2026-02-20 04:22:20.745701 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745711 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.312) 0:00:49.032 ******* 2026-02-20 04:22:20.745722 | orchestrator | 2026-02-20 04:22:20.745733 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-20 04:22:20.745744 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.071) 0:00:49.104 ******* 2026-02-20 04:22:20.745755 | orchestrator | 2026-02-20 04:22:20.745766 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-20 04:22:20.745786 | orchestrator | Friday 20 February 2026 04:22:20 +0000 (0:00:00.102) 0:00:49.206 ******* 2026-02-20 04:23:46.157339 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:23:46.157509 | orchestrator | changed: [testbed-manager] 2026-02-20 04:23:46.157536 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:23:46.157556 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:23:46.157575 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:23:46.157594 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:23:46.157615 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:23:46.157634 | orchestrator | 2026-02-20 04:23:46.157656 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-20 04:23:46.157677 | orchestrator | Friday 20 February 2026 04:22:56 +0000 (0:00:35.551) 0:01:24.757 ******* 2026-02-20 04:23:46.157696 | orchestrator | changed: [testbed-manager] 2026-02-20 
04:23:46.157713 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:23:46.157731 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:23:46.157750 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:23:46.157770 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:23:46.157789 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:23:46.157808 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:23:46.157827 | orchestrator | 2026-02-20 04:23:46.157847 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-20 04:23:46.157868 | orchestrator | Friday 20 February 2026 04:23:31 +0000 (0:00:35.667) 0:02:00.425 ******* 2026-02-20 04:23:46.157889 | orchestrator | ok: [testbed-manager] 2026-02-20 04:23:46.157909 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:23:46.157928 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:23:46.157980 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:23:46.158002 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:23:46.158065 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:23:46.158090 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:23:46.158110 | orchestrator | 2026-02-20 04:23:46.158132 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-20 04:23:46.158153 | orchestrator | Friday 20 February 2026 04:23:33 +0000 (0:00:01.984) 0:02:02.410 ******* 2026-02-20 04:23:46.158173 | orchestrator | changed: [testbed-manager] 2026-02-20 04:23:46.158192 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:23:46.158212 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:23:46.158230 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:23:46.158250 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:23:46.158273 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:23:46.158292 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:23:46.158312 | orchestrator | 2026-02-20 04:23:46.158331 
| orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:23:46.158351 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158372 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158392 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158430 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158451 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158663 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158701 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:23:46.158720 | orchestrator | 2026-02-20 04:23:46.158739 | orchestrator | 2026-02-20 04:23:46.158759 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:23:46.158778 | orchestrator | Friday 20 February 2026 04:23:45 +0000 (0:00:11.742) 0:02:14.152 ******* 2026-02-20 04:23:46.158796 | orchestrator | =============================================================================== 2026-02-20 04:23:46.158816 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.67s 2026-02-20 04:23:46.158835 | orchestrator | common : Restart fluentd container ------------------------------------- 35.55s 2026-02-20 04:23:46.158854 | orchestrator | common : Restart cron container ---------------------------------------- 11.74s 2026-02-20 04:23:46.158872 | orchestrator | common : Copying over config.json files for services -------------------- 3.74s 2026-02-20 
04:23:46.158889 | orchestrator | service-check-containers : common | Check containers -------------------- 3.62s 2026-02-20 04:23:46.158914 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.62s 2026-02-20 04:23:46.158938 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.32s 2026-02-20 04:23:46.158954 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.50s 2026-02-20 04:23:46.158970 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.27s 2026-02-20 04:23:46.158987 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.20s 2026-02-20 04:23:46.159001 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.18s 2026-02-20 04:23:46.159016 | orchestrator | common : include_tasks -------------------------------------------------- 2.18s 2026-02-20 04:23:46.159048 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s 2026-02-20 04:23:46.159064 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.09s 2026-02-20 04:23:46.159106 | orchestrator | common : include_tasks -------------------------------------------------- 2.03s 2026-02-20 04:23:46.159124 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s 2026-02-20 04:23:46.159140 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.93s 2026-02-20 04:23:46.159155 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.88s 2026-02-20 04:23:46.159192 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.84s 2026-02-20 04:23:46.159221 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.49s 2026-02-20 
04:23:46.425143 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-20 04:23:48.433388 | orchestrator | 2026-02-20 04:23:48 | INFO  | Task c4250806-c3dc-459d-a34c-66b11931ea9e (loadbalancer) was prepared for execution. 2026-02-20 04:23:48.433580 | orchestrator | 2026-02-20 04:23:48 | INFO  | It takes a moment until task c4250806-c3dc-459d-a34c-66b11931ea9e (loadbalancer) has been started and output is visible here. 2026-02-20 04:24:23.475953 | orchestrator | 2026-02-20 04:24:23.476084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:24:23.476103 | orchestrator | 2026-02-20 04:24:23.476115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:24:23.476127 | orchestrator | Friday 20 February 2026 04:23:54 +0000 (0:00:01.456) 0:00:01.456 ******* 2026-02-20 04:24:23.476138 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:23.476151 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:23.476162 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:23.476173 | orchestrator | 2026-02-20 04:24:23.476192 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 04:24:23.476211 | orchestrator | Friday 20 February 2026 04:23:56 +0000 (0:00:01.885) 0:00:03.342 ******* 2026-02-20 04:24:23.476232 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-20 04:24:23.476251 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-20 04:24:23.476268 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-20 04:24:23.476284 | orchestrator | 2026-02-20 04:24:23.476303 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-20 04:24:23.476321 | orchestrator | 2026-02-20 04:24:23.476340 | orchestrator | TASK [loadbalancer : include_tasks] 
******************************************** 2026-02-20 04:24:23.476358 | orchestrator | Friday 20 February 2026 04:23:58 +0000 (0:00:01.855) 0:00:05.197 ******* 2026-02-20 04:24:23.476378 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:24:23.476398 | orchestrator | 2026-02-20 04:24:23.476418 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-02-20 04:24:23.476435 | orchestrator | Friday 20 February 2026 04:24:01 +0000 (0:00:02.790) 0:00:07.988 ******* 2026-02-20 04:24:23.476486 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:23.476500 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:23.476529 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:23.476543 | orchestrator | 2026-02-20 04:24:23.476555 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-20 04:24:23.476567 | orchestrator | Friday 20 February 2026 04:24:03 +0000 (0:00:02.192) 0:00:10.180 ******* 2026-02-20 04:24:23.476579 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:23.476592 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:23.476605 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:23.476618 | orchestrator | 2026-02-20 04:24:23.476630 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-20 04:24:23.476643 | orchestrator | Friday 20 February 2026 04:24:05 +0000 (0:00:02.313) 0:00:12.494 ******* 2026-02-20 04:24:23.476692 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:23.476713 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:23.476732 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:23.476751 | orchestrator | 2026-02-20 04:24:23.476769 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-20 04:24:23.476788 | orchestrator | Friday 20 February 2026 
04:24:07 +0000 (0:00:01.916) 0:00:14.410 ******* 2026-02-20 04:24:23.476802 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:24:23.476813 | orchestrator | 2026-02-20 04:24:23.476824 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-20 04:24:23.476835 | orchestrator | Friday 20 February 2026 04:24:09 +0000 (0:00:01.893) 0:00:16.304 ******* 2026-02-20 04:24:23.476846 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:23.476857 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:23.476868 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:23.476879 | orchestrator | 2026-02-20 04:24:23.476889 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-20 04:24:23.476901 | orchestrator | Friday 20 February 2026 04:24:11 +0000 (0:00:01.883) 0:00:18.187 ******* 2026-02-20 04:24:23.476921 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.476939 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.476956 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.476974 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.476992 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.477011 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-20 04:24:23.477029 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-20 04:24:23.477049 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-20 04:24:23.477068 | orchestrator | ok: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-20 04:24:23.477086 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-20 04:24:23.477104 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-20 04:24:23.477123 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-20 04:24:23.477143 | orchestrator | 2026-02-20 04:24:23.477161 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-20 04:24:23.477180 | orchestrator | Friday 20 February 2026 04:24:14 +0000 (0:00:03.385) 0:00:21.573 ******* 2026-02-20 04:24:23.477199 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-20 04:24:23.477218 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-20 04:24:23.477237 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-20 04:24:23.477257 | orchestrator | 2026-02-20 04:24:23.477276 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-20 04:24:23.477319 | orchestrator | Friday 20 February 2026 04:24:16 +0000 (0:00:01.946) 0:00:23.520 ******* 2026-02-20 04:24:23.477339 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-20 04:24:23.477356 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-20 04:24:23.477373 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-20 04:24:23.477390 | orchestrator | 2026-02-20 04:24:23.477406 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-20 04:24:23.477422 | orchestrator | Friday 20 February 2026 04:24:18 +0000 (0:00:02.281) 0:00:25.801 ******* 2026-02-20 04:24:23.477441 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-20 04:24:23.477485 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:24:23.477531 | 
orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-20 04:24:23.477551 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:24:23.477570 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-20 04:24:23.477589 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:24:23.477608 | orchestrator | 2026-02-20 04:24:23.477626 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-20 04:24:23.477645 | orchestrator | Friday 20 February 2026 04:24:20 +0000 (0:00:01.850) 0:00:27.652 ******* 2026-02-20 04:24:23.477679 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:23.477705 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:23.477718 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:23.477729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:24:23.477741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2026-02-20 04:24:23.477764 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:24:34.322313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 04:24:34.322439 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 04:24:34.322505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 04:24:34.322519 | orchestrator | 2026-02-20 04:24:34.322532 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-20 04:24:34.322545 | orchestrator | Friday 20 February 2026 04:24:23 +0000 (0:00:02.752) 0:00:30.404 ******* 2026-02-20 04:24:34.322555 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:34.322566 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:34.322576 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:34.322586 | orchestrator | 2026-02-20 04:24:34.322596 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-20 04:24:34.322607 | orchestrator | Friday 20 February 2026 04:24:25 +0000 (0:00:01.978) 0:00:32.383 ******* 2026-02-20 04:24:34.322624 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-20 04:24:34.322642 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-20 04:24:34.322658 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-20 04:24:34.322673 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-20 04:24:34.322688 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-20 04:24:34.322704 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-20 04:24:34.322719 | orchestrator | 2026-02-20 04:24:34.322735 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-20 04:24:34.322751 | orchestrator | Friday 20 February 2026 04:24:28 +0000 (0:00:02.780) 0:00:35.163 ******* 2026-02-20 04:24:34.322767 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:34.322783 
| orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:34.322800 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:34.322816 | orchestrator | 2026-02-20 04:24:34.322832 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-20 04:24:34.322850 | orchestrator | Friday 20 February 2026 04:24:30 +0000 (0:00:02.242) 0:00:37.406 ******* 2026-02-20 04:24:34.322867 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:24:34.322911 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:24:34.322923 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:24:34.322935 | orchestrator | 2026-02-20 04:24:34.322946 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-20 04:24:34.322958 | orchestrator | Friday 20 February 2026 04:24:32 +0000 (0:00:02.237) 0:00:39.643 ******* 2026-02-20 04:24:34.322971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-20 04:24:34.323005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 04:24:34.323020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:34.323044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:34.323060 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:24:34.323074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-20 04:24:34.323086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 04:24:34.323106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:34.323118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:34.323130 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:24:34.323150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 04:24:38.517183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 04:24:38.517293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:38.517310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:38.517349 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:24:38.517364 | orchestrator | 2026-02-20 04:24:38.517377 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-20 04:24:38.517389 | orchestrator | Friday 20 February 2026 04:24:34 +0000 (0:00:01.601) 0:00:41.245 ******* 2026-02-20 04:24:38.517402 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:38.517414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:38.517426 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:38.517502 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:24:38.517524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:38.517550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:38.517571 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:24:38.517583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:38.517595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:38.517620 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:24:52.131608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:24:52.131720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670', '__omit_place_holder__cffb7934335089efbcf77e87dfa724dd9d434670'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-20 04:24:52.131761 | orchestrator | 2026-02-20 04:24:52.131774 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-20 04:24:52.131786 | orchestrator | Friday 20 February 2026 04:24:38 +0000 (0:00:04.199) 0:00:45.445 ******* 2026-02-20 04:24:52.131797 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:52.131809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 04:24:52.131820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 
04:24:52.131844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:24:52.131872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:24:52.131890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:24:52.131900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:24:52.131910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:24:52.131920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:24:52.131930 | orchestrator |
2026-02-20 04:24:52.131940 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-20 04:24:52.131951 | orchestrator | Friday 20 February 2026 04:24:43 +0000 (0:00:04.776) 0:00:50.222 *******
2026-02-20 04:24:52.131961 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 04:24:52.131972 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 04:24:52.131982 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-20 04:24:52.131992 | orchestrator |
2026-02-20 04:24:52.132001 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-20 04:24:52.132011 | orchestrator | Friday 20 February 2026 04:24:46 +0000 (0:00:02.779) 0:00:53.002 *******
2026-02-20 04:24:52.132021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 04:24:52.132030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 04:24:52.132040 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-20 04:24:52.132050 | orchestrator |
2026-02-20 04:24:52.132064 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-20 04:24:52.132075 | orchestrator | Friday 20 February 2026 04:24:50 +0000 (0:00:04.212) 0:00:57.214 *******
2026-02-20 04:24:52.132085 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:24:52.132102 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:24:52.132119 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:12.757855 | orchestrator |
2026-02-20 04:25:12.757978 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-20 04:25:12.757994 | orchestrator | Friday 20 February 2026 04:24:52 +0000 (0:00:01.839) 0:00:59.054 *******
2026-02-20 04:25:12.758005 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 04:25:12.758014 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 04:25:12.758073 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-20 04:25:12.758081 | orchestrator |
2026-02-20 04:25:12.758090 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-20 04:25:12.758098 | orchestrator | Friday 20 February 2026 04:24:55 +0000 (0:00:03.062) 0:01:02.116 *******
2026-02-20 04:25:12.758106 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 04:25:12.758116 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 04:25:12.758131 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-20 04:25:12.758139 | orchestrator |
2026-02-20 04:25:12.758147 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-20 04:25:12.758155 | orchestrator | Friday 20 February 2026 04:24:58 +0000 (0:00:02.832) 0:01:04.949 *******
2026-02-20 04:25:12.758163 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:25:12.758172 | orchestrator |
2026-02-20 04:25:12.758180 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-20 04:25:12.758188 | orchestrator | Friday 20 February 2026 04:24:59 +0000 (0:00:01.829) 0:01:06.778 *******
2026-02-20 04:25:12.758196 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-02-20 04:25:12.758205 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-02-20 04:25:12.758213 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-02-20 04:25:12.758221 | orchestrator |
2026-02-20 04:25:12.758229 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-20 04:25:12.758237 | orchestrator | Friday 20 February 2026 04:25:02 +0000 (0:00:02.735) 0:01:09.513 *******
2026-02-20 04:25:12.758245 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-20 04:25:12.758253 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-20 04:25:12.758261 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-20 04:25:12.758270 | orchestrator |
2026-02-20 04:25:12.758278 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-02-20 04:25:12.758286 | orchestrator | Friday 20 February 2026 04:25:05 +0000 (0:00:02.644) 0:01:12.158 *******
2026-02-20 04:25:12.758294 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:12.758303 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:25:12.758311 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:12.758319 | orchestrator |
2026-02-20 04:25:12.758327 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-02-20 04:25:12.758335 | orchestrator | Friday 20 February 2026 04:25:06 +0000 (0:00:01.375) 0:01:13.534 *******
2026-02-20 04:25:12.758343 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:12.758351 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:25:12.758359 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:12.758367 | orchestrator |
2026-02-20 04:25:12.758374 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-20 04:25:12.758383 | orchestrator | Friday 20 February 2026 04:25:08 +0000 (0:00:01.842) 0:01:15.377 *******
2026-02-20 04:25:12.758408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:25:12.758426 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:25:12.758473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 04:25:12.758484 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:12.758492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:12.758500 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:12.758515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:12.758525 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:12.758543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:16.614219 | orchestrator |
2026-02-20 04:25:16.614318 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-20 04:25:16.614334 | orchestrator | Friday 20 February 2026 04:25:12 +0000 (0:00:04.300) 0:01:19.677 *******
2026-02-20 04:25:16.614350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:25:16.614365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:16.614378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:16.614389 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:16.614403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:25:16.614503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:16.614560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:16.614581 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:25:16.614625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 04:25:16.614643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:16.614679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:16.614699 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:16.614717 | orchestrator |
2026-02-20 04:25:16.614734 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-20 04:25:16.614755 | orchestrator | Friday 20 February 2026 04:25:14 +0000 (0:00:01.709) 0:01:21.387 *******
2026-02-20 04:25:16.614776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:25:16.614811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:16.614832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:16.614852 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:16.614894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:25:28.202941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:28.203057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:28.203081 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:25:28.203111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 04:25:28.203165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:28.203183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:28.203198 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:28.203215 | orchestrator |
2026-02-20 04:25:28.203235 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-20 04:25:28.203255 | orchestrator | Friday 20 February 2026 04:25:16 +0000 (0:00:02.153) 0:01:23.540 *******
2026-02-20 04:25:28.203271 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-20 04:25:28.203290 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-20 04:25:28.203325 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-20 04:25:28.203344 | orchestrator |
2026-02-20 04:25:28.203355 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-20 04:25:28.203365 | orchestrator | Friday 20 February 2026 04:25:19 +0000 (0:00:02.433) 0:01:25.974 *******
2026-02-20 04:25:28.203374 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-20 04:25:28.203384 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-20 04:25:28.203394 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-20 04:25:28.203404 | orchestrator |
2026-02-20 04:25:28.203434 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-20 04:25:28.203476 | orchestrator | Friday 20 February 2026 04:25:21 +0000 (0:00:02.482) 0:01:28.456 *******
2026-02-20 04:25:28.203487 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-20 04:25:28.203497 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-20 04:25:28.203507 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-20 04:25:28.203517 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 04:25:28.203527 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:28.203537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 04:25:28.203555 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:25:28.203565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-20 04:25:28.203575 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:25:28.203585 | orchestrator |
2026-02-20 04:25:28.203595 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-20 04:25:28.203605 | orchestrator | Friday 20 February 2026 04:25:24 +0000 (0:00:02.494) 0:01:30.951 *******
2026-02-20 04:25:28.203616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:25:28.203627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:25:28.203639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 04:25:28.203655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:28.203683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:32.235426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:32.235658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:32.235680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:32.235693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:32.235705 | orchestrator |
2026-02-20 04:25:32.235719 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-20 04:25:32.235732 | orchestrator | Friday 20 February 2026 04:25:28 +0000 (0:00:04.178) 0:01:35.130 *******
2026-02-20 04:25:32.235744 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:25:32.235757 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:25:32.235768 | orchestrator | }
2026-02-20 04:25:32.235779 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:25:32.235790 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:25:32.235801 | orchestrator | }
2026-02-20 04:25:32.235812 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:25:32.235822 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:25:32.235833 | orchestrator | }
2026-02-20 04:25:32.235860 | orchestrator |
2026-02-20 04:25:32.235872 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-20 04:25:32.235893 | orchestrator | Friday 20 February 2026 04:25:29 +0000 (0:00:01.316) 0:01:36.446 *******
2026-02-20 04:25:32.235906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:25:32.235963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:25:32.235978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:25:32.235992 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:25:32.236005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:25:32.236018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 04:25:32.236031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:25:32.236044 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:25:32.236058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-20 04:25:32.236076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-20 04:25:32.236105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-20 04:25:37.543667 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:25:37.543812 | orchestrator | 2026-02-20 04:25:37.543841 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-20 04:25:37.543863 | orchestrator | Friday 20 February 2026 04:25:32 +0000 (0:00:02.712) 0:01:39.158 ******* 2026-02-20 04:25:37.543883 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:25:37.543901 | orchestrator | 2026-02-20 04:25:37.543920 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-20 04:25:37.543939 | orchestrator | Friday 20 February 2026 04:25:34 +0000 (0:00:01.887) 0:01:41.046 ******* 2026-02-20 04:25:37.543965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:37.543991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:37.544013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:37.544079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:37.544124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:37.544144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:37.544164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:37.544181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:37.544206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:37.544237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:37.544266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.228936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.229026 | orchestrator | 2026-02-20 04:25:39.229044 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-20 04:25:39.229060 | orchestrator | Friday 20 February 2026 04:25:38 +0000 (0:00:04.499) 0:01:45.545 ******* 2026-02-20 04:25:39.229076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:39.229093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:39.229142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.229156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.229169 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:25:39.229200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:39.229215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:39.229228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.229241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:39.229262 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:25:39.229291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:39.229305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-20 04:25:39.229325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:53.854223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-20 04:25:53.854351 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:25:53.854368 | orchestrator | 2026-02-20 04:25:53.854377 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-02-20 04:25:53.854388 | orchestrator | Friday 20 February 2026 04:25:40 +0000 (0:00:01.735) 0:01:47.280 ******* 2026-02-20 04:25:53.854397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854506 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:25:53.854518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854535 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:25:53.854543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:25:53.854577 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 04:25:53.854591 | orchestrator | 2026-02-20 04:25:53.854605 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-20 04:25:53.854618 | orchestrator | Friday 20 February 2026 04:25:42 +0000 (0:00:02.124) 0:01:49.405 ******* 2026-02-20 04:25:53.854631 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:25:53.854646 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:25:53.854659 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:25:53.854673 | orchestrator | 2026-02-20 04:25:53.854686 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-20 04:25:53.854700 | orchestrator | Friday 20 February 2026 04:25:44 +0000 (0:00:02.360) 0:01:51.765 ******* 2026-02-20 04:25:53.854711 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:25:53.854719 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:25:53.854733 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:25:53.854753 | orchestrator | 2026-02-20 04:25:53.854767 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-20 04:25:53.854781 | orchestrator | Friday 20 February 2026 04:25:47 +0000 (0:00:02.809) 0:01:54.574 ******* 2026-02-20 04:25:53.854794 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:25:53.854809 | orchestrator | 2026-02-20 04:25:53.854821 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-20 04:25:53.854834 | orchestrator | Friday 20 February 2026 04:25:49 +0000 (0:00:01.623) 0:01:56.198 ******* 2026-02-20 04:25:53.854878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:53.854898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:53.854927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:25:53.854955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:53.854976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-02-20 04:25:53.854990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:25:53.855017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:25:55.459197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459300 | orchestrator | 2026-02-20 04:25:55.459326 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-20 04:25:55.459338 | orchestrator | Friday 20 February 2026 04:25:53 +0000 (0:00:04.575) 0:02:00.773 ******* 2026-02-20 04:25:55.459350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:55.459362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459402 | orchestrator | skipping: [testbed-node-0] 
2026-02-20 04:25:55.459429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:55.459537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459557 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:25:55.459567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:25:55.459585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-20 04:25:55.459602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:11.892179 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:11.892261 | orchestrator | 2026-02-20 04:26:11.892281 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-20 04:26:11.892290 | orchestrator | Friday 20 February 2026 04:25:55 +0000 (0:00:01.613) 0:02:02.387 ******* 2026-02-20 04:26:11.892305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:11.892317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-20 04:26:11.892326 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:11.892347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:11.892352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:11.892357 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:11.892361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:11.892365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:11.892369 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:11.892373 | orchestrator | 2026-02-20 04:26:11.892377 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-20 04:26:11.892396 | orchestrator | Friday 20 February 2026 04:25:57 +0000 (0:00:01.905) 0:02:04.292 ******* 2026-02-20 04:26:11.892400 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:26:11.892405 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:26:11.892409 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:26:11.892413 | orchestrator | 2026-02-20 04:26:11.892417 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-02-20 04:26:11.892421 | orchestrator | Friday 20 February 2026 04:25:59 +0000 (0:00:02.420) 0:02:06.713 ******* 2026-02-20 04:26:11.892424 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:26:11.892457 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:26:11.892461 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:26:11.892465 | orchestrator | 2026-02-20 04:26:11.892469 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-20 04:26:11.892473 | orchestrator | Friday 20 February 2026 04:26:02 +0000 (0:00:02.851) 0:02:09.564 ******* 2026-02-20 04:26:11.892476 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:11.892480 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:11.892484 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:11.892488 | orchestrator | 2026-02-20 04:26:11.892492 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-20 04:26:11.892496 | orchestrator | Friday 20 February 2026 04:26:03 +0000 (0:00:01.360) 0:02:10.925 ******* 2026-02-20 04:26:11.892499 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:26:11.892503 | orchestrator | 2026-02-20 04:26:11.892507 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-20 04:26:11.892511 | orchestrator | Friday 20 February 2026 04:26:05 +0000 (0:00:01.649) 0:02:12.574 ******* 2026-02-20 04:26:11.892516 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 04:26:11.892536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 04:26:11.892541 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-20 04:26:11.892548 | orchestrator | 2026-02-20 04:26:11.892552 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-20 04:26:11.892557 | orchestrator | Friday 20 February 2026 04:26:09 +0000 (0:00:03.618) 0:02:16.192 ******* 2026-02-20 04:26:11.892565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 04:26:11.892569 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:11.892573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 04:26:11.892577 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:11.892585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-20 04:26:23.743701 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:23.743853 | orchestrator | 2026-02-20 04:26:23.743882 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-20 04:26:23.743904 | orchestrator | Friday 20 February 2026 04:26:11 +0000 (0:00:02.626) 0:02:18.819 ******* 2026-02-20 04:26:23.743928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 
04:26:23.743972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 04:26:23.744022 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:23.744044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 04:26:23.744065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 04:26:23.744084 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:23.744102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 04:26:23.744120 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-20 04:26:23.744140 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:23.744160 | orchestrator | 2026-02-20 04:26:23.744179 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-20 04:26:23.744200 | orchestrator | Friday 20 February 2026 04:26:14 +0000 (0:00:02.687) 0:02:21.507 ******* 2026-02-20 04:26:23.744219 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:23.744237 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:23.744257 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:23.744276 | orchestrator | 2026-02-20 04:26:23.744297 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-20 04:26:23.744317 | orchestrator | Friday 20 February 2026 04:26:16 +0000 (0:00:01.463) 0:02:22.970 ******* 2026-02-20 04:26:23.744337 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:23.744356 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:23.744375 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:23.744394 | orchestrator | 2026-02-20 04:26:23.744413 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-20 04:26:23.744532 | orchestrator | Friday 20 February 2026 04:26:18 +0000 (0:00:02.388) 0:02:25.359 ******* 2026-02-20 04:26:23.744554 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:26:23.744571 | orchestrator | 2026-02-20 04:26:23.744588 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-20 04:26:23.744604 | orchestrator | Friday 20 February 2026 04:26:20 +0000 (0:00:01.751) 0:02:27.111 ******* 2026-02-20 04:26:23.744669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:23.744711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:23.744733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:23.744754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:23.744773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:23.744817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2026-02-20 04:26:25.804692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:25.804726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804816 | orchestrator | 2026-02-20 04:26:25.804830 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-20 04:26:25.804843 | orchestrator | Friday 20 February 2026 04:26:24 +0000 (0:00:04.670) 0:02:31.782 ******* 2026-02-20 04:26:25.804857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:26:25.804870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 04:26:25.804939 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:25.804981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:26:37.032656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032822 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:37.032838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:26:37.032864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-20 04:26:37.032913 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:37.032924 | orchestrator | 2026-02-20 04:26:37.032935 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-20 04:26:37.032946 | orchestrator | Friday 20 February 2026 04:26:26 +0000 (0:00:02.077) 0:02:33.859 ******* 2026-02-20 04:26:37.032956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:37.032969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:37.032988 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:37.032998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:37.033008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 
04:26:37.033018 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:37.033028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:37.033038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:26:37.033048 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:37.033058 | orchestrator | 2026-02-20 04:26:37.033068 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-20 04:26:37.033077 | orchestrator | Friday 20 February 2026 04:26:28 +0000 (0:00:01.938) 0:02:35.798 ******* 2026-02-20 04:26:37.033087 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:26:37.033099 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:26:37.033108 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:26:37.033118 | orchestrator | 2026-02-20 04:26:37.033127 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-20 04:26:37.033137 | orchestrator | Friday 20 February 2026 04:26:31 +0000 (0:00:02.367) 0:02:38.165 ******* 2026-02-20 04:26:37.033147 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:26:37.033156 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:26:37.033166 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:26:37.033175 | orchestrator | 2026-02-20 04:26:37.033190 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-20 04:26:37.033202 | orchestrator | Friday 20 February 2026 04:26:34 +0000 (0:00:02.872) 0:02:41.038 ******* 2026-02-20 04:26:37.033213 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:37.033225 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:37.033236 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:37.033248 | orchestrator | 2026-02-20 04:26:37.033259 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-20 04:26:37.033271 | orchestrator | Friday 20 February 2026 04:26:35 +0000 (0:00:01.555) 0:02:42.593 ******* 2026-02-20 04:26:37.033282 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:37.033294 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:26:37.033311 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:26:42.240774 | orchestrator | 2026-02-20 04:26:42.240865 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-20 04:26:42.240877 | orchestrator | Friday 20 February 2026 04:26:37 +0000 (0:00:01.369) 0:02:43.962 ******* 2026-02-20 04:26:42.240885 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:26:42.240893 | orchestrator | 2026-02-20 04:26:42.240900 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-20 04:26:42.240908 | orchestrator | Friday 20 February 2026 04:26:38 +0000 (0:00:01.735) 0:02:45.698 ******* 2026-02-20 04:26:42.240919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:42.240950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:26:42.240959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.240968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.240987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.241010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.241024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.241032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:42.241041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:26:42.241048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.241060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 04:26:42.241073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.326966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:26:44.327117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:26:44.327137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327183 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327230 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 04:26:44.327239 | orchestrator | 2026-02-20 04:26:44.327249 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-20 04:26:44.327259 | orchestrator | Friday 20 February 2026 04:26:43 +0000 (0:00:04.926) 0:02:50.624 ******* 2026-02-20 04:26:44.327284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:26:44.327301 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:26:44.327331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.642877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-20 04:26:45.642983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.643000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.643012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-02-20 04:26:45.643027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:26:45.643088 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:26:45.643138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:26:45.643160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.643181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.643202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.643970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.644016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 04:26:45.644044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:27:00.698191 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:00.698330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-20 04:27:00.698354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-20 04:27:00.698406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-20 04:27:00.698452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-20 04:27:00.698511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:27:00.698525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-20 04:27:00.698538 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:00.698550 | orchestrator | 2026-02-20 04:27:00.698563 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-20 04:27:00.698575 | orchestrator | Friday 20 February 2026 04:26:45 +0000 (0:00:01.951) 0:02:52.575 ******* 2026-02-20 04:27:00.698612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698641 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:00.698653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698676 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:00.698688 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:00.698711 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:00.698722 | orchestrator | 2026-02-20 04:27:00.698734 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-20 04:27:00.698754 | orchestrator | Friday 20 February 2026 04:26:47 +0000 (0:00:02.037) 0:02:54.613 ******* 2026-02-20 04:27:00.698765 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:00.698777 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:00.698788 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:00.698799 | orchestrator | 2026-02-20 04:27:00.698811 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-20 04:27:00.698822 | orchestrator | Friday 20 February 2026 04:26:50 +0000 (0:00:02.418) 0:02:57.032 ******* 2026-02-20 04:27:00.698833 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:00.698844 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:00.698855 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:00.698866 | orchestrator | 2026-02-20 04:27:00.698877 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-20 04:27:00.698889 | orchestrator | Friday 20 February 2026 04:26:52 +0000 (0:00:02.837) 0:02:59.869 ******* 2026-02-20 04:27:00.698900 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:00.698911 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
04:27:00.698922 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:00.698934 | orchestrator | 2026-02-20 04:27:00.698945 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-20 04:27:00.698956 | orchestrator | Friday 20 February 2026 04:26:54 +0000 (0:00:01.430) 0:03:01.300 ******* 2026-02-20 04:27:00.698967 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:27:00.698978 | orchestrator | 2026-02-20 04:27:00.698994 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-20 04:27:00.699006 | orchestrator | Friday 20 February 2026 04:26:56 +0000 (0:00:01.914) 0:03:03.215 ******* 2026-02-20 04:27:00.699030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 04:27:01.801710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 04:27:01.801862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 04:27:01.801903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 04:27:01.801932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-20 04:27:01.801955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 04:27:05.171767 | orchestrator | 2026-02-20 04:27:05.171867 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-20 04:27:05.171883 | orchestrator | Friday 20 February 2026 04:27:01 +0000 (0:00:05.521) 0:03:08.736 ******* 2026-02-20 04:27:05.171918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 04:27:05.171936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 04:27:05.171970 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:05.172010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-20 04:27:05.172025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-20 04:27:05.172045 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:05.172072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2026-02-20 04:27:23.300481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-02-20 04:27:23.300631 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:23.300651 | orchestrator | 2026-02-20 04:27:23.300664 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-20 04:27:23.300677 | orchestrator | Friday 20 February 2026 04:27:06 +0000 (0:00:04.517) 0:03:13.254 ******* 2026-02-20 04:27:23.300690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300717 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:23.300729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300789 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:23.300801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-20 04:27:23.300834 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 04:27:23.300845 | orchestrator | 2026-02-20 04:27:23.300857 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-20 04:27:23.300868 | orchestrator | Friday 20 February 2026 04:27:10 +0000 (0:00:04.270) 0:03:17.525 ******* 2026-02-20 04:27:23.300879 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:23.300891 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:23.300902 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:23.300913 | orchestrator | 2026-02-20 04:27:23.300924 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-20 04:27:23.300936 | orchestrator | Friday 20 February 2026 04:27:12 +0000 (0:00:02.186) 0:03:19.711 ******* 2026-02-20 04:27:23.300956 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:23.300974 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:23.301005 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:23.301024 | orchestrator | 2026-02-20 04:27:23.301042 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-20 04:27:23.301061 | orchestrator | Friday 20 February 2026 04:27:15 +0000 (0:00:02.605) 0:03:22.317 ******* 2026-02-20 04:27:23.301080 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:23.301098 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:23.301117 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:23.301137 | orchestrator | 2026-02-20 04:27:23.301156 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-20 04:27:23.301176 | orchestrator | Friday 20 February 2026 04:27:16 +0000 (0:00:01.349) 0:03:23.667 ******* 2026-02-20 04:27:23.301198 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:27:23.301218 | orchestrator | 2026-02-20 04:27:23.301237 | orchestrator | TASK [haproxy-config : 
Copying over grafana haproxy config] ******************** 2026-02-20 04:27:23.301248 | orchestrator | Friday 20 February 2026 04:27:18 +0000 (0:00:01.677) 0:03:25.344 ******* 2026-02-20 04:27:23.301261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:27:23.301309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:27:39.988215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:27:39.988343 | orchestrator | 2026-02-20 04:27:39.988357 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-20 04:27:39.988368 | orchestrator | Friday 20 February 2026 04:27:23 +0000 (0:00:04.877) 0:03:30.222 ******* 2026-02-20 04:27:39.988379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:27:39.988389 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:39.988401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:27:39.988457 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:39.988467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:27:39.988477 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:39.988486 | orchestrator | 2026-02-20 04:27:39.988495 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-20 04:27:39.988504 | orchestrator | Friday 20 February 2026 04:27:25 +0000 (0:00:01.735) 0:03:31.957 ******* 2026-02-20 04:27:39.988514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988559 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:39.988584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988604 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:39.988613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:27:39.988631 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:39.988640 | orchestrator | 2026-02-20 04:27:39.988649 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-20 04:27:39.988658 | orchestrator | Friday 20 February 2026 04:27:26 +0000 
(0:00:01.584) 0:03:33.542 ******* 2026-02-20 04:27:39.988667 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:39.988677 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:39.988686 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:39.988695 | orchestrator | 2026-02-20 04:27:39.988704 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-20 04:27:39.988712 | orchestrator | Friday 20 February 2026 04:27:28 +0000 (0:00:02.384) 0:03:35.927 ******* 2026-02-20 04:27:39.988721 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:39.988730 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:39.988739 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:39.988748 | orchestrator | 2026-02-20 04:27:39.988757 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-20 04:27:39.988767 | orchestrator | Friday 20 February 2026 04:27:32 +0000 (0:00:03.035) 0:03:38.963 ******* 2026-02-20 04:27:39.988778 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:39.988789 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:39.988799 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:39.988809 | orchestrator | 2026-02-20 04:27:39.988819 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-20 04:27:39.988830 | orchestrator | Friday 20 February 2026 04:27:33 +0000 (0:00:01.399) 0:03:40.362 ******* 2026-02-20 04:27:39.988840 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:27:39.988850 | orchestrator | 2026-02-20 04:27:39.988861 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-20 04:27:39.988869 | orchestrator | Friday 20 February 2026 04:27:35 +0000 (0:00:01.848) 0:03:42.210 ******* 2026-02-20 04:27:39.988894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 04:27:41.662344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 04:27:41.662495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-20 04:27:41.662527 | orchestrator | 2026-02-20 04:27:41.662536 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-20 04:27:41.662543 | orchestrator | Friday 20 February 2026 04:27:39 +0000 (0:00:04.707) 0:03:46.918 ******* 2026-02-20 04:27:41.662552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 04:27:41.662569 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:41.662588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 04:27:50.413596 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:50.413732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-20 04:27:50.413778 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:50.413803 | orchestrator | 2026-02-20 04:27:50.413816 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-20 04:27:50.413829 | orchestrator | Friday 20 February 2026 04:27:41 +0000 (0:00:01.676) 0:03:48.595 ******* 2026-02-20 04:27:50.413920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.413943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.413957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.413970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.413982 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-20 04:27:50.413994 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:50.414122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.414141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.414155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.414168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.414195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-20 04:27:50.414214 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 04:27:50.414234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.414254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.414283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-20 04:27:50.414303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-20 04:27:50.414322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-20 04:27:50.414341 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:50.414361 | orchestrator | 2026-02-20 04:27:50.414382 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-20 04:27:50.414401 | 
orchestrator | Friday 20 February 2026 04:27:43 +0000 (0:00:01.936) 0:03:50.531 ******* 2026-02-20 04:27:50.414452 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:50.414473 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:50.414491 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:50.414510 | orchestrator | 2026-02-20 04:27:50.414529 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-20 04:27:50.414542 | orchestrator | Friday 20 February 2026 04:27:45 +0000 (0:00:02.309) 0:03:52.841 ******* 2026-02-20 04:27:50.414553 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:27:50.414563 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:27:50.414574 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:27:50.414585 | orchestrator | 2026-02-20 04:27:50.414596 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-20 04:27:50.414607 | orchestrator | Friday 20 February 2026 04:27:48 +0000 (0:00:02.934) 0:03:55.775 ******* 2026-02-20 04:27:50.414618 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:27:50.414630 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:27:50.414641 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:27:50.414652 | orchestrator | 2026-02-20 04:27:50.414663 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-20 04:27:50.414674 | orchestrator | Friday 20 February 2026 04:27:50 +0000 (0:00:01.352) 0:03:57.127 ******* 2026-02-20 04:27:50.414696 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:00.335393 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:00.335523 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:00.335556 | orchestrator | 2026-02-20 04:28:00.335563 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-20 04:28:00.335568 | orchestrator | Friday 20 February 2026 
04:27:51 +0000 (0:00:01.342) 0:03:58.470 ******* 2026-02-20 04:28:00.335572 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:28:00.335577 | orchestrator | 2026-02-20 04:28:00.335580 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-20 04:28:00.335584 | orchestrator | Friday 20 February 2026 04:27:53 +0000 (0:00:01.950) 0:04:00.421 ******* 2026-02-20 04:28:00.335593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-20 04:28:00.335600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:00.335622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:00.335627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-20 04:28:00.335647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:00.335652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:00.335656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-20 04:28:00.335664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:00.335668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:00.335672 | orchestrator | 2026-02-20 04:28:00.335676 | orchestrator | TASK [haproxy-config : Add configuration for 
keystone when using single external frontend] *** 2026-02-20 04:28:00.335682 | orchestrator | Friday 20 February 2026 04:27:58 +0000 (0:00:04.875) 0:04:05.297 ******* 2026-02-20 04:28:00.335694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-20 04:28:01.995829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:01.995915 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:01.995926 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:01.995952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-20 04:28:01.995961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:01.995985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:01.995993 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:01.996015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-20 04:28:01.996023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-20 04:28:01.996035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-20 04:28:01.996042 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:01.996049 | orchestrator | 2026-02-20 04:28:01.996057 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-20 04:28:01.996065 | orchestrator 
| Friday 20 February 2026 04:28:00 +0000 (0:00:01.967) 0:04:07.264 ******* 2026-02-20 04:28:01.996074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996097 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:01.996104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996118 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:01.996125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-20 04:28:01.996139 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:01.996145 | orchestrator | 2026-02-20 04:28:01.996152 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-20 04:28:01.996164 | orchestrator | Friday 20 February 2026 04:28:01 +0000 (0:00:01.655) 0:04:08.920 ******* 2026-02-20 04:28:17.262358 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:17.262530 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:17.262548 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:17.262560 | orchestrator | 2026-02-20 04:28:17.262571 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-20 04:28:17.262583 | orchestrator | Friday 20 February 2026 04:28:04 +0000 (0:00:02.343) 0:04:11.264 ******* 2026-02-20 04:28:17.262593 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:17.262603 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:17.262613 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:17.262623 | orchestrator | 2026-02-20 04:28:17.262633 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-20 04:28:17.262643 | orchestrator | Friday 20 February 2026 04:28:07 +0000 (0:00:03.102) 0:04:14.366 ******* 2026-02-20 04:28:17.262653 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:17.262664 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:17.262673 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:17.262683 | orchestrator | 2026-02-20 04:28:17.262693 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-20 04:28:17.262704 | orchestrator | Friday 20 February 2026 04:28:08 +0000 (0:00:01.399) 0:04:15.766 ******* 2026-02-20 04:28:17.262714 | 
orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:28:17.262724 | orchestrator | 2026-02-20 04:28:17.262734 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-20 04:28:17.262744 | orchestrator | Friday 20 February 2026 04:28:10 +0000 (0:00:01.814) 0:04:17.581 ******* 2026-02-20 04:28:17.262776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:28:17.262815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:28:17.262828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:28:17.262858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:28:17.262870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:28:17.262893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  
2026-02-20 04:28:17.262906 | orchestrator | 2026-02-20 04:28:17.262917 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-20 04:28:17.262929 | orchestrator | Friday 20 February 2026 04:28:15 +0000 (0:00:04.931) 0:04:22.512 ******* 2026-02-20 04:28:17.262941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:17.262961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:28:30.462524 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:30.462685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:30.462775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:28:30.462803 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:30.462825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:30.462848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:28:30.462861 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:30.462873 | orchestrator | 2026-02-20 04:28:30.462885 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-20 04:28:30.462898 | orchestrator | Friday 20 February 2026 04:28:17 +0000 (0:00:01.677) 0:04:24.190 ******* 2026-02-20 04:28:30.462932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.462950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.462965 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:30.462978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.462991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.463012 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:30.463026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.463039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:30.463052 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:30.463064 | orchestrator | 2026-02-20 04:28:30.463077 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-20 04:28:30.463096 | orchestrator | Friday 20 February 2026 04:28:19 +0000 (0:00:01.886) 0:04:26.077 ******* 2026-02-20 04:28:30.463109 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:30.463123 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:30.463135 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:30.463148 | orchestrator | 2026-02-20 04:28:30.463161 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-20 04:28:30.463174 | orchestrator | Friday 20 February 2026 04:28:21 +0000 (0:00:02.351) 0:04:28.428 ******* 2026-02-20 04:28:30.463187 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:30.463200 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:30.463212 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:30.463224 | orchestrator | 2026-02-20 04:28:30.463244 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-20 04:28:30.463271 | orchestrator | Friday 20 February 2026 04:28:24 +0000 (0:00:03.009) 0:04:31.438 ******* 2026-02-20 04:28:30.463291 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:28:30.463309 | orchestrator | 2026-02-20 04:28:30.463327 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-20 04:28:30.463345 | orchestrator | Friday 20 February 2026 04:28:26 +0000 (0:00:02.150) 0:04:33.588 ******* 2026-02-20 04:28:30.463365 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:28:30.463385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:30.463522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 
04:28:32.140366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:28:32.140483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:32.140505 | orchestrator | 2026-02-20 04:28:32.140513 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-20 04:28:32.140521 | orchestrator | Friday 20 February 2026 04:28:31 +0000 (0:00:04.876) 0:04:38.465 ******* 2026-02-20 04:28:32.140530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:32.140547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.242975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243121 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:35.243137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:35.243150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243230 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:35.243247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:28:35.243259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-20 04:28:35.243303 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:35.243315 | orchestrator | 2026-02-20 04:28:35.243328 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-20 04:28:35.243340 | orchestrator | Friday 20 February 2026 04:28:33 +0000 (0:00:01.711) 0:04:40.176 ******* 2026-02-20 04:28:35.243353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:35.243368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:35.243381 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:35.243392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-20 04:28:35.243468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:50.526742 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:50.526871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:50.526891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:28:50.526906 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:50.526918 | orchestrator | 2026-02-20 04:28:50.526930 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-20 04:28:50.526960 | orchestrator | Friday 20 February 2026 04:28:35 +0000 (0:00:01.991) 0:04:42.168 ******* 2026-02-20 04:28:50.526972 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:50.526984 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:50.526995 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:50.527006 | orchestrator | 2026-02-20 04:28:50.527018 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-20 04:28:50.527029 | orchestrator | Friday 20 February 2026 04:28:37 +0000 (0:00:02.258) 0:04:44.427 ******* 2026-02-20 04:28:50.527041 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:28:50.527052 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:28:50.527063 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:28:50.527074 | 
orchestrator | 2026-02-20 04:28:50.527085 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-20 04:28:50.527096 | orchestrator | Friday 20 February 2026 04:28:40 +0000 (0:00:03.063) 0:04:47.490 ******* 2026-02-20 04:28:50.527108 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:28:50.527119 | orchestrator | 2026-02-20 04:28:50.527130 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-20 04:28:50.527141 | orchestrator | Friday 20 February 2026 04:28:43 +0000 (0:00:02.544) 0:04:50.035 ******* 2026-02-20 04:28:50.527178 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 04:28:50.527190 | orchestrator | 2026-02-20 04:28:50.527201 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-20 04:28:50.527212 | orchestrator | Friday 20 February 2026 04:28:47 +0000 (0:00:03.957) 0:04:53.992 ******* 2026-02-20 04:28:50.527229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:50.527266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:28:50.527281 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:50.527296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:50.527320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:28:50.527333 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:50.527356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:54.142493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:28:54.142653 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:28:54.142706 | orchestrator | 2026-02-20 04:28:54.142718 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-20 04:28:54.142730 | orchestrator | Friday 20 February 2026 04:28:50 +0000 (0:00:03.458) 0:04:57.450 ******* 2026-02-20 04:28:54.142744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:54.142758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:28:54.142769 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:28:54.142808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:54.142828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:28:54.142839 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:28:54.142850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:28:54.142870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-20 04:29:10.006465 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:10.006559 | orchestrator | 2026-02-20 04:29:10.006570 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-20 04:29:10.006611 | orchestrator | Friday 20 February 2026 04:28:54 +0000 (0:00:03.615) 0:05:01.066 ******* 2026-02-20 04:29:10.006621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-02-20 04:29:10.006632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-20 04:29:10.006640 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:10.006647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-20 04:29:10.006654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-20 04:29:10.006661 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:10.006669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-20 04:29:10.006678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-20 04:29:10.006690 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:10.006701 | orchestrator | 2026-02-20 04:29:10.006712 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-20 04:29:10.006732 | orchestrator | Friday 20 February 2026 04:28:58 +0000 (0:00:03.971) 0:05:05.037 ******* 2026-02-20 04:29:10.006744 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:29:10.006773 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:29:10.006786 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:29:10.006796 | orchestrator | 2026-02-20 04:29:10.006809 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-02-20 04:29:10.006828 | orchestrator | Friday 20 February 2026 04:29:01 +0000 (0:00:03.060) 0:05:08.097 ******* 2026-02-20 04:29:10.006837 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:10.006844 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:10.006851 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:10.006858 | orchestrator | 2026-02-20 04:29:10.006865 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-20 04:29:10.006871 | orchestrator | Friday 20 February 2026 04:29:03 +0000 (0:00:02.592) 0:05:10.690 ******* 2026-02-20 04:29:10.006878 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:10.006885 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:10.006892 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:10.006899 | orchestrator | 2026-02-20 04:29:10.006905 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-20 04:29:10.006912 | orchestrator | Friday 20 February 2026 04:29:05 +0000 (0:00:01.352) 0:05:12.042 ******* 2026-02-20 04:29:10.006919 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:29:10.006926 | orchestrator | 2026-02-20 04:29:10.006932 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-20 04:29:10.006939 | orchestrator | Friday 20 February 2026 04:29:07 +0000 (0:00:02.155) 0:05:14.198 ******* 2026-02-20 04:29:10.006947 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:29:10.006956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:29:10.006963 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:29:10.006975 | orchestrator 
| 2026-02-20 04:29:10.006983 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-20 04:29:10.006990 | orchestrator | Friday 20 February 2026 04:29:09 +0000 (0:00:02.595) 0:05:16.794 ******* 2026-02-20 04:29:10.007004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:29:24.469278 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:24.469380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:29:24.469394 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 04:29:24.469444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:29:24.469452 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:24.469459 | orchestrator | 2026-02-20 04:29:24.469466 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-20 04:29:24.469474 | orchestrator | Friday 20 February 2026 04:29:11 +0000 (0:00:01.782) 0:05:18.577 ******* 2026-02-20 04:29:24.469482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-20 04:29:24.469490 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:24.469496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-20 04:29:24.469503 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:24.469509 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-20 04:29:24.469534 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:24.469540 | orchestrator | 2026-02-20 04:29:24.469547 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-20 04:29:24.469553 | orchestrator | Friday 20 February 2026 04:29:13 +0000 (0:00:01.479) 0:05:20.057 ******* 2026-02-20 04:29:24.469569 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:24.469575 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:24.469582 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:24.469588 | orchestrator | 2026-02-20 04:29:24.469594 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-20 04:29:24.469601 | orchestrator | Friday 20 February 2026 04:29:14 +0000 (0:00:01.421) 0:05:21.479 ******* 2026-02-20 04:29:24.469607 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:24.469613 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:24.469620 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:24.469626 | orchestrator | 2026-02-20 04:29:24.469632 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-20 04:29:24.469639 | orchestrator | Friday 20 February 2026 04:29:16 +0000 (0:00:02.377) 0:05:23.856 ******* 2026-02-20 04:29:24.469645 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:24.469651 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:24.469657 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:24.469663 | orchestrator | 2026-02-20 04:29:24.469677 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-02-20 04:29:24.469684 | orchestrator | Friday 20 February 2026 04:29:18 +0000 (0:00:01.342) 0:05:25.199 ******* 2026-02-20 04:29:24.469690 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:29:24.469697 | orchestrator | 2026-02-20 04:29:24.469703 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-20 04:29:24.469709 | orchestrator | Friday 20 February 2026 04:29:20 +0000 (0:00:01.959) 0:05:27.159 ******* 2026-02-20 04:29:24.469739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:29:24.469750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.469763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:24.469770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:24.469788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.635994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.636072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.636084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:24.636109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:24.636119 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.636127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:24.636155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.636164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.636173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:29:24.636186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:24.636194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:24.636210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.979054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:24.979191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:24.979212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.979227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.979255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.979290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:24.979303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:24.979337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.979350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:24.979362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:24.979379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:24.979487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:25.167436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:25.167528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:29:25.167541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:25.167564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:25.167589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:25.167617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:25.167626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:25.167635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:25.167643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:25.167654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:25.167673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.610270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:27.610382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.610459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.610498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:27.610516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 
04:29:27.610563 | orchestrator | 2026-02-20 04:29:27.610586 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-20 04:29:27.610608 | orchestrator | Friday 20 February 2026 04:29:26 +0000 (0:00:06.081) 0:05:33.240 ******* 2026-02-20 04:29:27.610658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:29:27.610681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.610695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:27.610714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:27.610742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.721074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.721178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.721194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:27.721208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:27.721239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.721281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:27.721313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.721327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:29:27.721341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.721358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.721379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:27.721449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:27.798524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:27.798628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:27.798646 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:27.798685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.798699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:29:27.798712 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.798744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.798757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.798769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:27.798832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-20 04:29:27.798847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-20 04:29:27.798870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:27.921823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.921926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.921991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:27.922014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.922131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.922144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:27.922175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 
5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.922186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-20 04:29:27.922215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:27.922230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:27.922241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:27.922251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:27.922262 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:27.922284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-20 04:29:44.883302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-20 04:29:44.883577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-20 04:29:44.883614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-20 04:29:44.883639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-20 04:29:44.883660 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:44.883682 | orchestrator | 2026-02-20 04:29:44.883702 | orchestrator | TASK [haproxy-config : 
Configuring firewall for neutron] *********************** 2026-02-20 04:29:44.883722 | orchestrator | Friday 20 February 2026 04:29:29 +0000 (0:00:02.790) 0:05:36.031 ******* 2026-02-20 04:29:44.883740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:29:44.883762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:29:44.883782 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:29:44.883801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:29:44.883861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:29:44.883883 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:29:44.883903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:29:44.883923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-20 
04:29:44.883942 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:29:44.883961 | orchestrator | 2026-02-20 04:29:44.883981 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-20 04:29:44.884002 | orchestrator | Friday 20 February 2026 04:29:32 +0000 (0:00:03.223) 0:05:39.255 ******* 2026-02-20 04:29:44.884021 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:29:44.884042 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:29:44.884062 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:29:44.884082 | orchestrator | 2026-02-20 04:29:44.884101 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-20 04:29:44.884131 | orchestrator | Friday 20 February 2026 04:29:34 +0000 (0:00:02.461) 0:05:41.717 ******* 2026-02-20 04:29:44.884151 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:29:44.884170 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:29:44.884189 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:29:44.884206 | orchestrator | 2026-02-20 04:29:44.884225 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-20 04:29:44.884243 | orchestrator | Friday 20 February 2026 04:29:37 +0000 (0:00:02.973) 0:05:44.691 ******* 2026-02-20 04:29:44.884262 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:29:44.884280 | orchestrator | 2026-02-20 04:29:44.884299 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-20 04:29:44.884317 | orchestrator | Friday 20 February 2026 04:29:40 +0000 (0:00:02.295) 0:05:46.986 ******* 2026-02-20 04:29:44.884338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-20 04:29:44.884360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-20 04:29:44.884406 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-20 04:30:01.916970 | orchestrator | 2026-02-20 04:30:01.917110 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-20 04:30:01.917129 | orchestrator | Friday 20 February 2026 04:29:44 +0000 (0:00:04.824) 0:05:51.811 ******* 2026-02-20 04:30:01.917165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:30:01.917183 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:30:01.917199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:30:01.917231 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:30:01.917245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:30:01.917257 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:30:01.917269 | orchestrator | 2026-02-20 04:30:01.917281 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-20 04:30:01.917292 | orchestrator | Friday 20 February 2026 04:29:46 +0000 (0:00:01.599) 0:05:53.410 ******* 2026-02-20 04:30:01.917305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917353 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:30:01.917364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917394 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:30:01.917406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:30:01.917458 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:30:01.917473 | orchestrator | 2026-02-20 04:30:01.917484 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-20 04:30:01.917495 | orchestrator | Friday 20 February 2026 04:29:48 +0000 (0:00:01.853) 0:05:55.264 ******* 2026-02-20 04:30:01.917506 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:30:01.917518 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:30:01.917529 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:30:01.917540 | orchestrator | 2026-02-20 04:30:01.917551 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-20 04:30:01.917572 | orchestrator | Friday 20 February 2026 04:29:50 +0000 (0:00:02.353) 0:05:57.617 ******* 2026-02-20 04:30:01.917584 | orchestrator | ok: [testbed-node-0] 2026-02-20 
04:30:01.917595 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:30:01.917605 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:30:01.917616 | orchestrator | 2026-02-20 04:30:01.917627 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-20 04:30:01.917639 | orchestrator | Friday 20 February 2026 04:29:53 +0000 (0:00:02.965) 0:06:00.582 ******* 2026-02-20 04:30:01.917650 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:30:01.917661 | orchestrator | 2026-02-20 04:30:01.917672 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-20 04:30:01.917683 | orchestrator | Friday 20 February 2026 04:29:55 +0000 (0:00:02.344) 0:06:02.927 ******* 2026-02-20 04:30:01.917695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:01.917719 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:03.096187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:03.096341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:03.096360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:03.096391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:30:03.096592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-20 04:30:03.096641 | orchestrator | 2026-02-20 04:30:03.096654 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-20 04:30:03.096675 | orchestrator | Friday 20 February 2026 04:30:03 +0000 (0:00:07.097) 0:06:10.025 ******* 2026-02-20 04:30:03.786850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:03.786964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:03.786977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 04:30:03.786986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-20 04:30:03.786995 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:03.787024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:03.787038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:03.787046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 04:30:03.787053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-20 04:30:03.787060 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:03.787068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:03.787085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:30:25.240038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-20 04:30:25.240148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-20 04:30:25.240164 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:25.240178 | orchestrator |
2026-02-20 04:30:25.240189 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-20 04:30:25.240201 | orchestrator | Friday 20 February 2026 04:30:04 +0000 (0:00:01.805) 0:06:11.831 *******
2026-02-20 04:30:25.240212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240258 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:25.240268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240349 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:25.240359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-20 04:30:25.240416 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:25.240426 | orchestrator |
2026-02-20 04:30:25.240436 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-20 04:30:25.240516 | orchestrator | Friday 20 February 2026 04:30:07 +0000 (0:00:02.715) 0:06:14.546 *******
2026-02-20 04:30:25.240527 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:30:25.240537 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:30:25.240546 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:30:25.240556 | orchestrator |
2026-02-20 04:30:25.240566 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-20 04:30:25.240576 | orchestrator | Friday 20 February 2026 04:30:09 +0000 (0:00:02.378) 0:06:16.924 *******
2026-02-20 04:30:25.240587 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:30:25.240598 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:30:25.240609 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:30:25.240620 | orchestrator |
2026-02-20 04:30:25.240631 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-20 04:30:25.240643 | orchestrator | Friday 20 February 2026 04:30:13 +0000 (0:00:03.115) 0:06:20.040 *******
2026-02-20 04:30:25.240654 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:30:25.240665 | orchestrator |
2026-02-20 04:30:25.240677 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-20 04:30:25.240688 | orchestrator | Friday 20 February 2026 04:30:15 +0000 (0:00:02.796) 0:06:22.836 *******
2026-02-20 04:30:25.240699 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-20 04:30:25.240712 | orchestrator |
2026-02-20 04:30:25.240723 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-20 04:30:25.240734 | orchestrator | Friday 20 February 2026 04:30:17 +0000 (0:00:01.661) 0:06:24.498 *******
2026-02-20 04:30:25.240748 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:25.240771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:25.240789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:25.240801 | orchestrator |
2026-02-20 04:30:25.240812 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-20 04:30:25.240825 | orchestrator | Friday 20 February 2026 04:30:23 +0000 (0:00:05.538) 0:06:30.036 *******
2026-02-20 04:30:25.240837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:25.240856 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:46.890539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.890655 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:46.890672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.890684 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:46.890694 | orchestrator |
2026-02-20 04:30:46.890705 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-20 04:30:46.890717 | orchestrator | Friday 20 February 2026 04:30:25 +0000 (0:00:02.135) 0:06:32.171 *******
2026-02-20 04:30:46.890728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890789 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:46.890799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890810 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:46.890820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-20 04:30:46.890840 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:46.890853 | orchestrator |
2026-02-20 04:30:46.890871 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 04:30:46.890882 | orchestrator | Friday 20 February 2026 04:30:27 +0000 (0:00:02.345) 0:06:34.517 *******
2026-02-20 04:30:46.890891 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:30:46.890902 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:30:46.890912 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:30:46.890921 | orchestrator |
2026-02-20 04:30:46.890935 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 04:30:46.890952 | orchestrator | Friday 20 February 2026 04:30:30 +0000 (0:00:03.361) 0:06:37.878 *******
2026-02-20 04:30:46.890984 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:30:46.891001 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:30:46.891015 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:30:46.891031 | orchestrator |
2026-02-20 04:30:46.891047 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-20 04:30:46.891063 | orchestrator | Friday 20 February 2026 04:30:34 +0000 (0:00:03.950) 0:06:41.829 *******
2026-02-20 04:30:46.891080 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-20 04:30:46.891100 | orchestrator |
2026-02-20 04:30:46.891118 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-20 04:30:46.891136 | orchestrator | Friday 20 February 2026 04:30:36 +0000 (0:00:01.735) 0:06:43.564 *******
2026-02-20 04:30:46.891179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891200 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:46.891218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891248 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:46.891266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891285 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:46.891300 | orchestrator |
2026-02-20 04:30:46.891315 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-20 04:30:46.891331 | orchestrator | Friday 20 February 2026 04:30:38 +0000 (0:00:02.334) 0:06:45.899 *******
2026-02-20 04:30:46.891346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891363 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:46.891380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891397 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:46.891421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-20 04:30:46.891439 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:46.891455 | orchestrator |
2026-02-20 04:30:46.891504 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-20 04:30:46.891523 | orchestrator | Friday 20 February 2026 04:30:41 +0000 (0:00:02.400) 0:06:48.300 *******
2026-02-20 04:30:46.891539 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:30:46.891556 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:30:46.891572 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:30:46.891590 | orchestrator |
2026-02-20 04:30:46.891606 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 04:30:46.891622 | orchestrator | Friday 20 February 2026 04:30:43 +0000 (0:00:02.114) 0:06:50.415 *******
2026-02-20 04:30:46.891639 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:30:46.891656 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:30:46.891672 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:30:46.891687 | orchestrator |
2026-02-20 04:30:46.891697 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 04:30:46.891707 | orchestrator | Friday 20 February 2026 04:30:46 +0000 (0:00:03.401) 0:06:53.817 *******
2026-02-20 04:31:15.422209 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:31:15.422325 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:31:15.422341 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:31:15.422353 | orchestrator |
2026-02-20 04:31:15.422365 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-20 04:31:15.422378 | orchestrator | Friday 20 February 2026 04:30:51 +0000 (0:00:04.135) 0:06:57.953 *******
2026-02-20 04:31:15.422389 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-20 04:31:15.422401 | orchestrator |
2026-02-20 04:31:15.422413 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-20 04:31:15.422425 | orchestrator | Friday 20 February 2026 04:30:53 +0000 (0:00:02.435) 0:07:00.388 *******
2026-02-20 04:31:15.422439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422453 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:31:15.422467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422478 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:31:15.422540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422553 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:31:15.422564 | orchestrator |
2026-02-20 04:31:15.422576 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-20 04:31:15.422593 | orchestrator | Friday 20 February 2026 04:30:55 +0000 (0:00:02.518) 0:07:02.906 *******
2026-02-20 04:31:15.422605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422616 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:31:15.422644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422678 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:31:15.422711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-20 04:31:15.422723 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:31:15.422735 | orchestrator |
2026-02-20 04:31:15.422746 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-20 04:31:15.422758 | orchestrator | Friday 20 February 2026 04:30:58 +0000 (0:00:02.726) 0:07:05.633 *******
2026-02-20 04:31:15.422772 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:31:15.422785 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:31:15.422798 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:31:15.422811 | orchestrator |
2026-02-20 04:31:15.422823 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-20 04:31:15.422841 | orchestrator | Friday 20 February 2026 04:31:01 +0000 (0:00:02.563) 0:07:08.197 *******
2026-02-20 04:31:15.422861 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:31:15.422880 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:31:15.422921 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:31:15.422944 | orchestrator |
2026-02-20 04:31:15.422963 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-20 04:31:15.422981 | orchestrator | Friday 20 February 2026 04:31:04 +0000 (0:00:03.569) 0:07:11.766 *******
2026-02-20 04:31:15.422999 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:31:15.423016 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:31:15.423032 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:31:15.423050 | orchestrator |
2026-02-20 04:31:15.423069 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-20 04:31:15.423088 | orchestrator | Friday 20 February 2026 04:31:09 +0000 (0:00:04.281) 0:07:16.047 *******
2026-02-20 04:31:15.423106 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:31:15.423123 | orchestrator |
2026-02-20 04:31:15.423141 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-20 04:31:15.423160 | orchestrator | Friday 20 February 2026 04:31:11 +0000 (0:00:02.386) 0:07:18.434 *******
2026-02-20 04:31:15.423182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 04:31:15.423206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 04:31:15.423251 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-20 04:31:15.423286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 04:31:17.612378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-20 04:31:17.612535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-20 04:31:17.612556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group':
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:17.612570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:17.612622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:17.612635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:17.612666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-20 04:31:17.612679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 04:31:17.612691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 04:31:17.612703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:17.612728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:17.612741 | orchestrator | 2026-02-20 04:31:17.612754 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-20 04:31:17.612767 | orchestrator | Friday 20 February 2026 04:31:16 +0000 (0:00:05.103) 0:07:23.538 ******* 2026-02-20 04:31:17.612789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 04:31:18.734931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 04:31:18.735065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 04:31:18.735087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:18.735128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:18.735141 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:18.735156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 04:31:18.735172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 04:31:18.735245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 04:31:18.735261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:18.735282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:18.735293 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:18.735318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-20 04:31:18.735331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-20 04:31:18.735350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-20 04:31:36.367659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-20 04:31:36.367776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-20 04:31:36.367815 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:36.367830 | orchestrator | 2026-02-20 04:31:36.367841 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-20 04:31:36.367852 | orchestrator | Friday 20 February 2026 04:31:18 +0000 (0:00:02.124) 0:07:25.663 ******* 2026-02-20 04:31:36.367863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367887 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:36.367897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367917 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:36.367926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-20 04:31:36.367960 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:36.367970 | orchestrator | 2026-02-20 04:31:36.367980 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-20 04:31:36.367989 | orchestrator | Friday 20 February 2026 04:31:20 +0000 (0:00:02.216) 0:07:27.879 ******* 2026-02-20 04:31:36.367999 | orchestrator | ok: [testbed-node-0] 2026-02-20 
04:31:36.368010 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:31:36.368019 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:31:36.368029 | orchestrator | 2026-02-20 04:31:36.368039 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-20 04:31:36.368048 | orchestrator | Friday 20 February 2026 04:31:23 +0000 (0:00:02.396) 0:07:30.276 ******* 2026-02-20 04:31:36.368058 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:31:36.368067 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:31:36.368077 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:31:36.368086 | orchestrator | 2026-02-20 04:31:36.368101 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-20 04:31:36.368117 | orchestrator | Friday 20 February 2026 04:31:26 +0000 (0:00:03.114) 0:07:33.390 ******* 2026-02-20 04:31:36.368134 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:31:36.368151 | orchestrator | 2026-02-20 04:31:36.368167 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-20 04:31:36.368182 | orchestrator | Friday 20 February 2026 04:31:29 +0000 (0:00:02.652) 0:07:36.043 ******* 2026-02-20 04:31:36.368219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:36.368251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:36.368270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:36.368297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:31:36.368331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:31:40.672041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:31:40.672146 | orchestrator | 2026-02-20 04:31:40.672162 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-20 
04:31:40.672173 | orchestrator | Friday 20 February 2026 04:31:36 +0000 (0:00:07.250) 0:07:43.293 ******* 2026-02-20 04:31:40.672200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:31:40.672212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:31:40.672240 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:40.672269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:31:40.672283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:31:40.672300 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:40.672323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:31:40.672333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:31:40.672349 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:40.672359 | orchestrator | 2026-02-20 04:31:40.672369 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-20 04:31:40.672378 | orchestrator | Friday 20 February 2026 04:31:38 +0000 (0:00:02.563) 0:07:45.856 ******* 2026-02-20 04:31:40.672388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:31:40.672405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789611 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:49.789634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:31:49.789648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789673 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:49.789700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:31:49.789740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-20 04:31:49.789764 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:49.789776 | orchestrator | 2026-02-20 04:31:49.789788 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-20 04:31:49.789820 | orchestrator | Friday 20 February 2026 04:31:40 +0000 (0:00:01.750) 0:07:47.607 ******* 2026-02-20 04:31:49.789832 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:49.789843 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:49.789854 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:49.789877 | orchestrator | 2026-02-20 04:31:49.789889 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-20 04:31:49.789910 | orchestrator | Friday 20 February 2026 04:31:42 +0000 (0:00:01.445) 0:07:49.053 ******* 2026-02-20 04:31:49.789921 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:49.789932 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:49.789943 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:31:49.789955 | orchestrator | 2026-02-20 04:31:49.789968 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-20 04:31:49.789980 | orchestrator | Friday 20 February 2026 04:31:44 +0000 (0:00:02.342) 0:07:51.395 ******* 2026-02-20 04:31:49.789993 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:31:49.790006 | orchestrator | 2026-02-20 04:31:49.790078 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-20 04:31:49.790094 | orchestrator | Friday 20 February 2026 04:31:47 +0000 (0:00:02.663) 0:07:54.059 ******* 2026-02-20 04:31:49.790133 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-20 04:31:49.790162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:49.790178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:49.790192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:49.790222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:49.790237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-20 04:31:49.790252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:49.790275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.817962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.818087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:51.818135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-20 04:31:51.818147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:51.818156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.818164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.818186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:51.818199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:51.818216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:51.818224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.818232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:51.818241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:31:51.818255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:54.000928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:54.001033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.001050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.001064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:31:54.001078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:31:54.001111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:54.001152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.001165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.001177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:31:54.001190 | orchestrator | 2026-02-20 04:31:54.001203 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single 
external frontend] *** 2026-02-20 04:31:54.001216 | orchestrator | Friday 20 February 2026 04:31:53 +0000 (0:00:05.938) 0:07:59.998 ******* 2026-02-20 04:31:54.001228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-20 04:31:54.001241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:54.001269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.199437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.199622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:54.199644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:31:54.199660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-20 04:31:54.199722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:54.199745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:54.199758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-20 04:31:54.199770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.199781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.199794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:54.199805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:31:54.199825 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:31:54.199839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:54.199867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': 
['option httpchk']}}}})  2026-02-20 04:31:55.388860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:55.388961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:55.388975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:55.389006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:31:55.389017 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:31:55.389043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-20 04:31:55.389071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-20 04:31:55.389082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:55.389092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:31:55.389101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-20 04:31:55.389118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:31:55.389132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-20 04:31:55.389148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:32:07.894998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:32:07.895149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-20 04:32:07.895169 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 04:32:07.895186 | orchestrator | 2026-02-20 04:32:07.895199 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-20 04:32:07.895211 | orchestrator | Friday 20 February 2026 04:31:55 +0000 (0:00:02.323) 0:08:02.322 ******* 2026-02-20 04:32:07.895251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895307 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:07.895318 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895400 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:07.895411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-20 04:32:07.895442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-20 04:32:07.895465 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:07.895476 | orchestrator | 2026-02-20 04:32:07.895487 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-20 04:32:07.895501 | orchestrator | Friday 20 February 2026 04:31:57 +0000 (0:00:01.851) 0:08:04.173 ******* 2026-02-20 04:32:07.895515 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:07.895562 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:07.895575 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:07.895594 | orchestrator | 2026-02-20 04:32:07.895613 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules 
config] ********* 2026-02-20 04:32:07.895632 | orchestrator | Friday 20 February 2026 04:31:59 +0000 (0:00:02.076) 0:08:06.250 ******* 2026-02-20 04:32:07.895651 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:07.895670 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:07.895688 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:07.895709 | orchestrator | 2026-02-20 04:32:07.895728 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-20 04:32:07.895748 | orchestrator | Friday 20 February 2026 04:32:01 +0000 (0:00:02.288) 0:08:08.539 ******* 2026-02-20 04:32:07.895763 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:32:07.895776 | orchestrator | 2026-02-20 04:32:07.895790 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-20 04:32:07.895803 | orchestrator | Friday 20 February 2026 04:32:03 +0000 (0:00:02.286) 0:08:10.826 ******* 2026-02-20 04:32:07.895825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:32:07.895852 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:32:26.410348 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:32:26.410498 | orchestrator | 2026-02-20 04:32:26.410530 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-20 04:32:26.410612 | orchestrator | Friday 20 February 2026 04:32:07 +0000 (0:00:03.990) 0:08:14.816 ******* 2026-02-20 04:32:26.410632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:32:26.410655 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:26.410694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:32:26.410717 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:26.410762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:32:26.410796 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:26.410808 | orchestrator | 2026-02-20 04:32:26.410820 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-20 04:32:26.410831 | orchestrator | Friday 20 February 2026 04:32:09 +0000 (0:00:01.856) 
0:08:16.673 ******* 2026-02-20 04:32:26.410843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 04:32:26.410855 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:26.410868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 04:32:26.410880 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:26.410893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-20 04:32:26.410905 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:26.410918 | orchestrator | 2026-02-20 04:32:26.410931 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-20 04:32:26.410944 | orchestrator | Friday 20 February 2026 04:32:11 +0000 (0:00:01.767) 0:08:18.441 ******* 2026-02-20 04:32:26.410956 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:26.410969 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:26.410981 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:26.410994 | orchestrator | 2026-02-20 04:32:26.411007 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-20 04:32:26.411019 | orchestrator | Friday 20 February 2026 04:32:13 +0000 (0:00:02.085) 0:08:20.527 ******* 2026-02-20 04:32:26.411032 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:26.411046 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:26.411058 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:26.411072 | orchestrator | 2026-02-20 04:32:26.411085 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-02-20 04:32:26.411098 | orchestrator | Friday 20 February 2026 04:32:15 +0000 (0:00:02.311) 0:08:22.838 ******* 2026-02-20 04:32:26.411111 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:32:26.411124 | orchestrator | 2026-02-20 04:32:26.411137 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-20 04:32:26.411149 | orchestrator | Friday 20 February 2026 04:32:18 +0000 (0:00:02.309) 0:08:25.147 ******* 2026-02-20 04:32:26.411170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-20 04:32:26.411192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-20 04:32:26.411244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-20 04:32:28.256758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': 
{'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-20 04:32:28.256888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}}}}) 2026-02-20 04:32:28.256930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-20 04:32:28.256945 | orchestrator | 2026-02-20 04:32:28.256959 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-20 04:32:28.256972 | orchestrator | Friday 20 February 2026 04:32:26 +0000 (0:00:08.188) 0:08:33.336 ******* 2026-02-20 04:32:28.257007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-20 04:32:28.257020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:32:28.257033 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:28.257052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-20 04:32:28.257072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:32:28.257084 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:28.257115 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-20 04:32:50.231412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-20 04:32:50.231510 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231519 | orchestrator | 2026-02-20 04:32:50.231524 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-20 04:32:50.231529 | orchestrator | Friday 20 February 2026 04:32:28 +0000 (0:00:01.848) 0:08:35.184 ******* 2026-02-20 04:32:50.231535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:32:50.231651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:32:50.231659 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:32:50.231684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:32:50.231691 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-20 04:32:50.231723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}) 
 2026-02-20 04:32:50.231728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-20 04:32:50.231739 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231743 | orchestrator | 2026-02-20 04:32:50.231747 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-20 04:32:50.231752 | orchestrator | Friday 20 February 2026 04:32:30 +0000 (0:00:02.131) 0:08:37.316 ******* 2026-02-20 04:32:50.231756 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:32:50.231760 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:32:50.231764 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:32:50.231768 | orchestrator | 2026-02-20 04:32:50.231773 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-20 04:32:50.231776 | orchestrator | Friday 20 February 2026 04:32:32 +0000 (0:00:02.351) 0:08:39.667 ******* 2026-02-20 04:32:50.231780 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:32:50.231784 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:32:50.231788 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:32:50.231792 | orchestrator | 2026-02-20 04:32:50.231795 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-20 04:32:50.231799 | orchestrator | Friday 20 February 2026 04:32:35 +0000 (0:00:03.118) 0:08:42.786 ******* 2026-02-20 04:32:50.231803 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231807 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231811 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231814 | orchestrator | 2026-02-20 04:32:50.231822 | orchestrator | TASK [include_role : trove] 
**************************************************** 2026-02-20 04:32:50.231826 | orchestrator | Friday 20 February 2026 04:32:37 +0000 (0:00:01.405) 0:08:44.192 ******* 2026-02-20 04:32:50.231830 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231833 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231837 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231841 | orchestrator | 2026-02-20 04:32:50.231845 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-20 04:32:50.231849 | orchestrator | Friday 20 February 2026 04:32:38 +0000 (0:00:01.358) 0:08:45.550 ******* 2026-02-20 04:32:50.231852 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231856 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231860 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231864 | orchestrator | 2026-02-20 04:32:50.231868 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-20 04:32:50.231871 | orchestrator | Friday 20 February 2026 04:32:40 +0000 (0:00:01.891) 0:08:47.442 ******* 2026-02-20 04:32:50.231875 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231879 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231883 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231887 | orchestrator | 2026-02-20 04:32:50.231890 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-20 04:32:50.231894 | orchestrator | Friday 20 February 2026 04:32:41 +0000 (0:00:01.382) 0:08:48.825 ******* 2026-02-20 04:32:50.231899 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:32:50.231902 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:32:50.231906 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:32:50.231910 | orchestrator | 2026-02-20 04:32:50.231914 | orchestrator | TASK [include_role : 
loadbalancer] ********************************************* 2026-02-20 04:32:50.231918 | orchestrator | Friday 20 February 2026 04:32:43 +0000 (0:00:01.445) 0:08:50.270 ******* 2026-02-20 04:32:50.231922 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:32:50.231926 | orchestrator | 2026-02-20 04:32:50.231930 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-20 04:32:50.231934 | orchestrator | Friday 20 February 2026 04:32:45 +0000 (0:00:02.632) 0:08:52.903 ******* 2026-02-20 04:32:50.231939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-20 04:32:50.231950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-20 04:32:54.578789 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-20 04:32:54.578911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:32:54.578927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:32:54.578939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-20 04:32:54.578953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 04:32:54.578987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-20 04:32:54.579017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:32:54.579031 | orchestrator |
2026-02-20 04:32:54.579044 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-20 04:32:54.579056 | orchestrator | Friday 20 February 2026 04:32:50 +0000 (0:00:04.252) 0:08:57.156 *******
2026-02-20 04:32:54.579068 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:32:54.579081 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:32:54.579092 | orchestrator | }
2026-02-20 04:32:54.579103 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:32:54.579114 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:32:54.579125 | orchestrator | }
2026-02-20 04:32:54.579136 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:32:54.579147 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:32:54.579158 | orchestrator | }
2026-02-20 04:32:54.579168 | orchestrator |
2026-02-20 04:32:54.579179 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-20 04:32:54.579190 | orchestrator | Friday 20 February 2026 04:32:51 +0000 (0:00:01.418) 0:08:58.575 *******
2026-02-20 04:32:54.579207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-20 04:32:54.579219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:32:54.579238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:32:54.579250 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:32:54.579261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-20 04:32:54.579273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:32:54.579293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:34:58.357092 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.357210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-20 04:34:58.357247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-20 04:34:58.357261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-20 04:34:58.357297 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.357310 | orchestrator |
2026-02-20 04:34:58.357322 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-20 04:34:58.357334 | orchestrator | Friday 20 February 2026 04:32:54 +0000 (0:00:02.926) 0:09:01.502 *******
2026-02-20 04:34:58.357345 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.357357 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.357367 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.357379 | orchestrator |
2026-02-20 04:34:58.357390 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-20 04:34:58.357401 | orchestrator | Friday 20 February 2026 04:32:56 +0000 (0:00:01.802) 0:09:03.304 *******
2026-02-20 04:34:58.357412 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.357423 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.357434 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.357444 | orchestrator |
2026-02-20 04:34:58.357455 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-20 04:34:58.357466 | orchestrator | Friday 20 February 2026 04:32:57 +0000 (0:00:01.422) 0:09:04.726 *******
2026-02-20 04:34:58.357478 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.357489 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.357500 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.357511 | orchestrator |
2026-02-20 04:34:58.357522 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-20 04:34:58.357533 | orchestrator | Friday 20 February 2026 04:33:04 +0000 (0:00:07.185) 0:09:11.912 *******
2026-02-20 04:34:58.357544 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.357555 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.357566 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.357577 | orchestrator |
2026-02-20 04:34:58.357588 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-20 04:34:58.357615 | orchestrator | Friday 20 February 2026 04:33:12 +0000 (0:00:07.587) 0:09:19.499 *******
2026-02-20 04:34:58.357628 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.357675 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.357697 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.357710 | orchestrator |
2026-02-20 04:34:58.357723 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-20 04:34:58.357736 | orchestrator | Friday 20 February 2026 04:33:19 +0000 (0:00:07.147) 0:09:26.647 *******
2026-02-20 04:34:58.357748 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.357761 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.357773 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.357786 | orchestrator |
2026-02-20 04:34:58.357810 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-20 04:34:58.357824 | orchestrator | Friday 20 February 2026 04:33:27 +0000 (0:00:08.097) 0:09:34.745 *******
2026-02-20 04:34:58.357836 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.357848 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.357859 | orchestrator |
2026-02-20 04:34:58.357869 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-20 04:34:58.357880 | orchestrator | Friday 20 February 2026 04:33:31 +0000 (0:00:03.737) 0:09:38.482 *******
2026-02-20 04:34:58.357891 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.357902 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.357913 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.357923 | orchestrator |
2026-02-20 04:34:58.357952 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-20 04:34:58.357972 | orchestrator | Friday 20 February 2026 04:33:45 +0000 (0:00:14.000) 0:09:52.483 *******
2026-02-20 04:34:58.357984 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.357995 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.358005 | orchestrator |
2026-02-20 04:34:58.358076 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-20 04:34:58.358100 | orchestrator | Friday 20 February 2026 04:33:50 +0000 (0:00:04.884) 0:09:57.368 *******
2026-02-20 04:34:58.358118 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:34:58.358137 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:34:58.358155 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:34:58.358173 | orchestrator |
2026-02-20 04:34:58.358193 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-20 04:34:58.358211 | orchestrator | Friday 20 February 2026 04:33:58 +0000 (0:00:07.713) 0:10:05.082 *******
2026-02-20 04:34:58.358231 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358250 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358268 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358284 | orchestrator |
2026-02-20 04:34:58.358296 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-20 04:34:58.358307 | orchestrator | Friday 20 February 2026 04:34:04 +0000 (0:00:06.818) 0:10:11.900 *******
2026-02-20 04:34:58.358325 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358337 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358348 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358358 | orchestrator |
2026-02-20 04:34:58.358370 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-20 04:34:58.358380 | orchestrator | Friday 20 February 2026 04:34:11 +0000 (0:00:06.904) 0:10:18.752 *******
2026-02-20 04:34:58.358391 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358402 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358413 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358424 | orchestrator |
2026-02-20 04:34:58.358435 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-20 04:34:58.358446 | orchestrator | Friday 20 February 2026 04:34:18 +0000 (0:00:06.904) 0:10:25.656 *******
2026-02-20 04:34:58.358457 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358468 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358479 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358489 | orchestrator |
2026-02-20 04:34:58.358500 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-02-20 04:34:58.358511 | orchestrator | Friday 20 February 2026 04:34:26 +0000 (0:00:07.383) 0:10:33.039 *******
2026-02-20 04:34:58.358522 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.358533 | orchestrator |
2026-02-20 04:34:58.358561 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-20 04:34:58.358572 | orchestrator | Friday 20 February 2026 04:34:29 +0000 (0:00:03.608) 0:10:36.648 *******
2026-02-20 04:34:58.358583 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358594 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358606 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358617 | orchestrator |
2026-02-20 04:34:58.358628 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-02-20 04:34:58.358663 | orchestrator | Friday 20 February 2026 04:34:42 +0000 (0:00:13.007) 0:10:49.655 *******
2026-02-20 04:34:58.358675 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.358687 | orchestrator |
2026-02-20 04:34:58.358698 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-20 04:34:58.358709 | orchestrator | Friday 20 February 2026 04:34:46 +0000 (0:00:03.664) 0:10:53.320 *******
2026-02-20 04:34:58.358720 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:34:58.358737 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:34:58.358757 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:34:58.358776 | orchestrator |
2026-02-20 04:34:58.358807 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-20 04:34:58.358828 | orchestrator | Friday 20 February 2026 04:34:53 +0000 (0:00:06.870) 0:11:00.191 *******
2026-02-20 04:34:58.358846 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.358863 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.358874 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.358885 | orchestrator |
2026-02-20 04:34:58.358896 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-20 04:34:58.358907 | orchestrator | Friday 20 February 2026 04:34:55 +0000 (0:00:02.155) 0:11:02.349 *******
2026-02-20 04:34:58.358918 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:34:58.358929 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:34:58.358940 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:34:58.358951 | orchestrator |
2026-02-20 04:34:58.358962 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:34:58.358974 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-20 04:34:58.358987 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-20 04:34:58.358998 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-02-20 04:34:58.359009 | orchestrator |
2026-02-20 04:34:58.359020 | orchestrator |
2026-02-20 04:34:58.359031 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:34:58.359042 | orchestrator | Friday 20 February 2026 04:34:58 +0000 (0:00:02.928) 0:11:05.277 *******
2026-02-20 04:34:58.359053 | orchestrator | ===============================================================================
2026-02-20 04:34:58.359064 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.00s
2026-02-20 04:34:58.359075 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.01s
2026-02-20 04:34:58.359086 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.19s
2026-02-20 04:34:58.359108 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.10s
2026-02-20 04:34:59.284964 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.71s
2026-02-20 04:34:59.285066 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.59s
2026-02-20 04:34:59.285080 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.38s
2026-02-20 04:34:59.285091 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.25s
2026-02-20 04:34:59.285102 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.19s
2026-02-20 04:34:59.285113 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.15s
2026-02-20 04:34:59.285124 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.10s
2026-02-20 04:34:59.285135 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.90s
2026-02-20 04:34:59.285147 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.87s
2026-02-20 04:34:59.285166 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.85s
2026-02-20 04:34:59.285213 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.82s
2026-02-20 04:34:59.285240 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.08s
2026-02-20 04:34:59.285258 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.94s
2026-02-20 04:34:59.285276 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.54s
2026-02-20 04:34:59.285293 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.52s
2026-02-20 04:34:59.285333 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.10s
2026-02-20 04:34:59.607902 | orchestrator | + osism apply -a upgrade opensearch
2026-02-20 04:35:01.728412 | orchestrator | 2026-02-20 04:35:01 | INFO  | Task 1336bab0-9c2a-4a27-b58b-2562820d128e (opensearch) was prepared for execution.
2026-02-20 04:35:01.728508 | orchestrator | 2026-02-20 04:35:01 | INFO  | It takes a moment until task 1336bab0-9c2a-4a27-b58b-2562820d128e (opensearch) has been started and output is visible here.
2026-02-20 04:35:20.689919 | orchestrator |
2026-02-20 04:35:20.690141 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 04:35:20.690182 | orchestrator |
2026-02-20 04:35:20.690203 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 04:35:20.690216 | orchestrator | Friday 20 February 2026 04:35:07 +0000 (0:00:01.639) 0:00:01.639 *******
2026-02-20 04:35:20.690227 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:35:20.690239 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:35:20.690250 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:35:20.690261 | orchestrator |
2026-02-20 04:35:20.690273 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 04:35:20.690284 | orchestrator | Friday 20 February 2026 04:35:08 +0000 (0:00:01.680) 0:00:03.319 *******
2026-02-20 04:35:20.690296 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-20 04:35:20.690307 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-20 04:35:20.690318 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-20 04:35:20.690329 | orchestrator |
2026-02-20 04:35:20.690340 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-20 04:35:20.690351 | orchestrator |
2026-02-20 04:35:20.690362 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-20 04:35:20.690373 | orchestrator | Friday 20 February 2026 04:35:12 +0000 (0:00:03.507) 0:00:06.827 *******
2026-02-20 04:35:20.690385 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:35:20.690400 | orchestrator |
2026-02-20 04:35:20.690419 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-20 04:35:20.690440 | orchestrator | Friday 20 February 2026 04:35:14 +0000 (0:00:01.857) 0:00:08.684 *******
2026-02-20 04:35:20.690459 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-20 04:35:20.690479 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-20 04:35:20.690498 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-20 04:35:20.690517 | orchestrator |
2026-02-20 04:35:20.690538 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-20 04:35:20.690559 | orchestrator | Friday 20 February 2026 04:35:16 +0000 (0:00:02.414) 0:00:11.098 *******
2026-02-20 04:35:20.690585 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:20.690633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:20.690797 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:20.690827 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:20.690850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:20.690881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:20.690914 | orchestrator |
2026-02-20 04:35:20.690933 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-20 04:35:20.690951 | orchestrator | Friday 20 February 2026 04:35:18 +0000 (0:00:02.430) 0:00:13.529 *******
2026-02-20 04:35:20.690970 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:35:20.690988 | orchestrator |
2026-02-20 04:35:20.691017 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-20 04:35:26.162270 | orchestrator | Friday 20 February 2026 04:35:20 +0000 (0:00:01.683) 0:00:15.212 *******
2026-02-20 04:35:26.162365 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:26.162379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:26.162386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:26.162421 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:26.162444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:26.162452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:26.162464 | orchestrator |
2026-02-20 04:35:26.162472 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-20 04:35:26.162479 | orchestrator | Friday 20 February 2026 04:35:24 +0000 (0:00:03.602) 0:00:18.814 *******
2026-02-20 04:35:26.162489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-20 04:35:26.162504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-20 04:35:28.066443 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:35:28.066585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
 2026-02-20 04:35:28.066597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:35:28.066619 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:35:28.066634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:35:28.066651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:35:28.066674 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:35:28.066681 | orchestrator | 2026-02-20 04:35:28.066689 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-20 04:35:28.066698 | orchestrator | Friday 20 February 2026 04:35:26 +0000 (0:00:01.873) 0:00:20.688 ******* 2026-02-20 04:35:28.066702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:35:28.066707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-02-20 04:35:28.066715 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:35:28.066722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:35:28.066731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:35:32.014346 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:35:32.014492 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:35:32.014510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:35:32.014521 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:35:32.014531 | orchestrator | 2026-02-20 04:35:32.014541 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-20 04:35:32.014564 | orchestrator | Friday 20 February 2026 04:35:28 +0000 (0:00:01.906) 0:00:22.594 ******* 2026-02-20 04:35:32.014575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:32.014602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:32.014612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:32.014630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:32.014645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:32.014714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:46.051718 | orchestrator | 2026-02-20 04:35:46.051808 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-20 04:35:46.051817 | orchestrator | Friday 20 February 2026 04:35:32 +0000 (0:00:03.947) 0:00:26.542 ******* 2026-02-20 04:35:46.051821 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:35:46.051827 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:35:46.051831 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:35:46.051835 | orchestrator | 2026-02-20 04:35:46.051839 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-02-20 04:35:46.051843 | orchestrator | Friday 20 February 2026 04:35:35 +0000 (0:00:03.563) 0:00:30.105 ******* 2026-02-20 04:35:46.051847 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:35:46.051851 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:35:46.051855 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:35:46.051859 | orchestrator | 2026-02-20 04:35:46.051863 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-20 04:35:46.051867 | orchestrator | Friday 20 February 2026 04:35:38 +0000 (0:00:03.230) 0:00:33.335 ******* 2026-02-20 04:35:46.051872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:46.051898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:46.051903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-20 04:35:46.051918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:46.051941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:46.051954 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-20 04:35:46.051961 | orchestrator | 2026-02-20 04:35:46.051967 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-20 04:35:46.051976 | orchestrator | Friday 20 February 2026 04:35:42 +0000 (0:00:03.700) 0:00:37.036 ******* 2026-02-20 04:35:46.051985 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:35:46.051993 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:35:46.051999 | orchestrator | } 2026-02-20 04:35:46.052006 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:35:46.052017 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:35:46.052023 | orchestrator | } 2026-02-20 04:35:46.052029 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:35:46.052035 | orchestrator 
|  "msg": "Notifying handlers" 2026-02-20 04:35:46.052040 | orchestrator | } 2026-02-20 04:35:46.052047 | orchestrator | 2026-02-20 04:35:46.052052 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:35:46.052059 | orchestrator | Friday 20 February 2026 04:35:43 +0000 (0:00:01.476) 0:00:38.513 ******* 2026-02-20 04:35:46.052072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:39:05.638379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:39:05.638489 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:39:05.638520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:39:05.638532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:39:05.638562 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:39:05.638589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-20 04:39:05.638600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-20 04:39:05.638610 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:39:05.638619 | orchestrator | 2026-02-20 04:39:05.638629 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-20 04:39:05.638640 | orchestrator | Friday 20 February 2026 04:35:46 +0000 (0:00:02.062) 0:00:40.575 ******* 2026-02-20 04:39:05.638653 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:39:05.638663 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:39:05.638672 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:39:05.638680 | orchestrator | 2026-02-20 04:39:05.638689 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-20 04:39:05.638698 | orchestrator | Friday 20 February 2026 04:35:47 +0000 (0:00:01.562) 0:00:42.138 ******* 2026-02-20 04:39:05.638707 | orchestrator | 
2026-02-20 04:39:05.638716 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-20 04:39:05.638724 | orchestrator | Friday 20 February 2026 04:35:48 +0000 (0:00:00.457) 0:00:42.595 ******* 2026-02-20 04:39:05.638743 | orchestrator | 2026-02-20 04:39:05.638785 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-20 04:39:05.638797 | orchestrator | Friday 20 February 2026 04:35:48 +0000 (0:00:00.445) 0:00:43.041 ******* 2026-02-20 04:39:05.638805 | orchestrator | 2026-02-20 04:39:05.638814 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-20 04:39:05.638823 | orchestrator | Friday 20 February 2026 04:35:49 +0000 (0:00:00.775) 0:00:43.816 ******* 2026-02-20 04:39:05.638833 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:05.638843 | orchestrator | 2026-02-20 04:39:05.638851 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-20 04:39:05.638860 | orchestrator | Friday 20 February 2026 04:35:52 +0000 (0:00:03.606) 0:00:47.423 ******* 2026-02-20 04:39:05.638869 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:05.638877 | orchestrator | 2026-02-20 04:39:05.638888 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-20 04:39:05.638898 | orchestrator | Friday 20 February 2026 04:35:59 +0000 (0:00:06.435) 0:00:53.858 ******* 2026-02-20 04:39:05.638908 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:39:05.638918 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:39:05.638928 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:39:05.638939 | orchestrator | 2026-02-20 04:39:05.638949 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-20 04:39:05.638959 | orchestrator | Friday 20 February 2026 04:37:13 +0000 (0:01:13.933) 
0:02:07.791 ******* 2026-02-20 04:39:05.638969 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:39:05.638979 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:39:05.638993 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:39:05.639008 | orchestrator | 2026-02-20 04:39:05.639023 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-20 04:39:05.639038 | orchestrator | Friday 20 February 2026 04:38:55 +0000 (0:01:42.468) 0:03:50.260 ******* 2026-02-20 04:39:05.639053 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:39:05.639067 | orchestrator | 2026-02-20 04:39:05.639083 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-20 04:39:05.639098 | orchestrator | Friday 20 February 2026 04:38:57 +0000 (0:00:01.678) 0:03:51.939 ******* 2026-02-20 04:39:05.639112 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:05.639129 | orchestrator | 2026-02-20 04:39:05.639145 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-20 04:39:05.639161 | orchestrator | Friday 20 February 2026 04:39:00 +0000 (0:00:03.588) 0:03:55.528 ******* 2026-02-20 04:39:05.639174 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:05.639185 | orchestrator | 2026-02-20 04:39:05.639196 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-20 04:39:05.639207 | orchestrator | Friday 20 February 2026 04:39:04 +0000 (0:00:03.385) 0:03:58.913 ******* 2026-02-20 04:39:05.639217 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:39:05.639228 | orchestrator | 2026-02-20 04:39:05.639241 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-20 04:39:05.639265 | orchestrator | Friday 20 February 2026 04:39:05 +0000 (0:00:01.243) 
0:04:00.156 ******* 2026-02-20 04:39:07.967490 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:39:07.967619 | orchestrator | 2026-02-20 04:39:07.967644 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:39:07.967665 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-20 04:39:07.967685 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 04:39:07.967703 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-20 04:39:07.967819 | orchestrator | 2026-02-20 04:39:07.967841 | orchestrator | 2026-02-20 04:39:07.967858 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:39:07.967873 | orchestrator | Friday 20 February 2026 04:39:07 +0000 (0:00:01.945) 0:04:02.102 ******* 2026-02-20 04:39:07.967889 | orchestrator | =============================================================================== 2026-02-20 04:39:07.967903 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------ 102.47s 2026-02-20 04:39:07.967918 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.93s 2026-02-20 04:39:07.967933 | orchestrator | opensearch : Perform a flush -------------------------------------------- 6.43s 2026-02-20 04:39:07.967947 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.95s 2026-02-20 04:39:07.967962 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.70s 2026-02-20 04:39:07.967978 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.61s 2026-02-20 04:39:07.967992 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.60s 2026-02-20 
04:39:07.968005 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.59s 2026-02-20 04:39:07.968038 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.56s 2026-02-20 04:39:07.968053 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.51s 2026-02-20 04:39:07.968066 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.39s 2026-02-20 04:39:07.968078 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.23s 2026-02-20 04:39:07.968091 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.43s 2026-02-20 04:39:07.968104 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.41s 2026-02-20 04:39:07.968116 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s 2026-02-20 04:39:07.968130 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.94s 2026-02-20 04:39:07.968143 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.91s 2026-02-20 04:39:07.968156 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.87s 2026-02-20 04:39:07.968169 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.86s 2026-02-20 04:39:07.968183 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.68s 2026-02-20 04:39:08.321959 | orchestrator | + osism apply -a upgrade memcached 2026-02-20 04:39:10.393341 | orchestrator | 2026-02-20 04:39:10 | INFO  | Task 5124a657-98e3-4c36-b32c-55b2c48b6a04 (memcached) was prepared for execution. 
2026-02-20 04:39:10.393432 | orchestrator | 2026-02-20 04:39:10 | INFO  | It takes a moment until task 5124a657-98e3-4c36-b32c-55b2c48b6a04 (memcached) has been started and output is visible here. 2026-02-20 04:39:32.814800 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-20 04:39:32.814915 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-20 04:39:32.814960 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-20 04:39:32.814983 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-20 04:39:32.815006 | orchestrator | 2026-02-20 04:39:32.815019 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:39:32.815030 | orchestrator | 2026-02-20 04:39:32.815041 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:39:32.815053 | orchestrator | Friday 20 February 2026 04:39:15 +0000 (0:00:00.896) 0:00:00.896 ******* 2026-02-20 04:39:32.815089 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:32.815102 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:39:32.815114 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:39:32.815125 | orchestrator | 2026-02-20 04:39:32.815137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 04:39:32.815147 | orchestrator | Friday 20 February 2026 04:39:15 +0000 (0:00:00.773) 0:00:01.670 ******* 2026-02-20 04:39:32.815158 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-20 04:39:32.815170 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-20 04:39:32.815181 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-20 04:39:32.815191 | orchestrator | 2026-02-20 04:39:32.815202 | orchestrator | PLAY [Apply role memcached] 
**************************************************** 2026-02-20 04:39:32.815213 | orchestrator | 2026-02-20 04:39:32.815224 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-20 04:39:32.815248 | orchestrator | Friday 20 February 2026 04:39:16 +0000 (0:00:00.695) 0:00:02.366 ******* 2026-02-20 04:39:32.815269 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:39:32.815281 | orchestrator | 2026-02-20 04:39:32.815292 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-20 04:39:32.815303 | orchestrator | Friday 20 February 2026 04:39:17 +0000 (0:00:00.994) 0:00:03.360 ******* 2026-02-20 04:39:32.815314 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-20 04:39:32.815381 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-20 04:39:32.815395 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-20 04:39:32.815406 | orchestrator | 2026-02-20 04:39:32.815417 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-20 04:39:32.815428 | orchestrator | Friday 20 February 2026 04:39:18 +0000 (0:00:00.776) 0:00:04.136 ******* 2026-02-20 04:39:32.815439 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-20 04:39:32.815450 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-20 04:39:32.815461 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-20 04:39:32.815472 | orchestrator | 2026-02-20 04:39:32.815483 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-20 04:39:32.815494 | orchestrator | Friday 20 February 2026 04:39:20 +0000 (0:00:01.674) 0:00:05.811 ******* 2026-02-20 04:39:32.815524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 
'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:39:32.815540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:39:32.815582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-20 04:39:32.815596 | orchestrator | 2026-02-20 04:39:32.815607 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-20 04:39:32.815618 | orchestrator | Friday 20 February 2026 04:39:21 +0000 (0:00:01.231) 0:00:07.043 ******* 2026-02-20 04:39:32.815629 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:39:32.815640 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:39:32.815652 | orchestrator | } 2026-02-20 04:39:32.815663 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:39:32.815674 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:39:32.815685 | orchestrator | } 2026-02-20 04:39:32.815695 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:39:32.815706 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:39:32.815717 | orchestrator | } 2026-02-20 04:39:32.815728 | orchestrator | 2026-02-20 04:39:32.815739 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:39:32.815750 | orchestrator | Friday 20 February 2026 04:39:21 +0000 (0:00:00.310) 0:00:07.354 ******* 2026-02-20 04:39:32.815807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:39:32.815821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:39:32.815832 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-20 04:39:32.815849 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-20 04:39:32.815910 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:39:32.815921 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:39:32.815933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-20 04:39:32.815953 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:39:32.815964 | orchestrator | 2026-02-20 04:39:32.815975 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-20 04:39:32.815986 | orchestrator | Friday 20 February 2026 04:39:22 +0000 (0:00:01.088) 0:00:08.443 ******* 2026-02-20 04:39:32.815997 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:39:32.816008 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:39:32.816027 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:39:33.138100 | orchestrator | 2026-02-20 04:39:33.138229 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:39:33.138258 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-20 04:39:33.138278 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-20 04:39:33.138296 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-20 04:39:33.138314 | orchestrator | 2026-02-20 04:39:33.138386 | orchestrator | 2026-02-20 04:39:33.138405 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:39:33.138424 | orchestrator | Friday 20 February 2026 04:39:32 +0000 (0:00:10.120) 0:00:18.563 ******* 2026-02-20 04:39:33.138443 | orchestrator | =============================================================================== 2026-02-20 04:39:33.138461 | orchestrator | memcached : Restart memcached container -------------------------------- 10.12s 2026-02-20 04:39:33.138481 | orchestrator | memcached : 
Copying over config.json files for services ----------------- 1.67s 2026-02-20 04:39:33.138499 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.23s 2026-02-20 04:39:33.138518 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.09s 2026-02-20 04:39:33.138537 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.99s 2026-02-20 04:39:33.138557 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s 2026-02-20 04:39:33.138578 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s 2026-02-20 04:39:33.138599 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2026-02-20 04:39:33.138618 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.31s 2026-02-20 04:39:33.448661 | orchestrator | + osism apply -a upgrade redis 2026-02-20 04:39:35.460739 | orchestrator | 2026-02-20 04:39:35 | INFO  | Task 4c9b07ee-f57e-4c2d-aa1c-49fbd02deaea (redis) was prepared for execution. 2026-02-20 04:39:35.460923 | orchestrator | 2026-02-20 04:39:35 | INFO  | It takes a moment until task 4c9b07ee-f57e-4c2d-aa1c-49fbd02deaea (redis) has been started and output is visible here. 
2026-02-20 04:39:51.821186 | orchestrator | 2026-02-20 04:39:51.821327 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:39:51.821356 | orchestrator | 2026-02-20 04:39:51.821373 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:39:51.821390 | orchestrator | Friday 20 February 2026 04:39:40 +0000 (0:00:01.496) 0:00:01.496 ******* 2026-02-20 04:39:51.821406 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:39:51.821456 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:39:51.821473 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:39:51.821488 | orchestrator | 2026-02-20 04:39:51.821504 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 04:39:51.821520 | orchestrator | Friday 20 February 2026 04:39:42 +0000 (0:00:01.637) 0:00:03.133 ******* 2026-02-20 04:39:51.821535 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-20 04:39:51.821553 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-20 04:39:51.821568 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-20 04:39:51.821584 | orchestrator | 2026-02-20 04:39:51.821601 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-20 04:39:51.821618 | orchestrator | 2026-02-20 04:39:51.821636 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-20 04:39:51.821664 | orchestrator | Friday 20 February 2026 04:39:44 +0000 (0:00:01.610) 0:00:04.744 ******* 2026-02-20 04:39:51.821674 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:39:51.821685 | orchestrator | 2026-02-20 04:39:51.821695 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-20 
04:39:51.821706 | orchestrator | Friday 20 February 2026 04:39:46 +0000 (0:00:02.528) 0:00:07.273 ******* 2026-02-20 04:39:51.821721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821752 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821827 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821877 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821909 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821922 | orchestrator | 2026-02-20 04:39:51.821934 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-20 04:39:51.821946 | orchestrator | Friday 20 February 2026 04:39:48 +0000 (0:00:02.054) 0:00:09.328 ******* 2026-02-20 04:39:51.821957 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821982 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.821994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:51.822100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081429 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081514 | orchestrator | 2026-02-20 04:39:59.081538 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-20 04:39:59.081546 | orchestrator | Friday 20 February 2026 04:39:51 +0000 (0:00:03.150) 0:00:12.479 ******* 2026-02-20 04:39:59.081555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-02-20 04:39:59.081573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081636 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081647 | orchestrator | 2026-02-20 04:39:59.081657 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-20 04:39:59.081669 | orchestrator | Friday 20 February 2026 04:39:55 +0000 (0:00:04.025) 0:00:16.504 ******* 2026-02-20 04:39:59.081680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-20 04:39:59.081750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 04:39:59.081764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 04:40:27.538912 | orchestrator |
2026-02-20 04:40:27.539034 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-02-20 04:40:27.539052 | orchestrator | Friday 20 February 2026 04:39:59 +0000 (0:00:03.226) 0:00:19.731 *******
2026-02-20 04:40:27.539065 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:40:27.539096 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:40:27.539108 | orchestrator | }
2026-02-20 04:40:27.539119 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:40:27.539131 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:40:27.539142 | orchestrator | }
2026-02-20 04:40:27.539153 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:40:27.539164 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:40:27.539202 | orchestrator | }
2026-02-20 04:40:27.539213 | orchestrator |
2026-02-20 04:40:27.539225 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-20 04:40:27.539236 | orchestrator | Friday 20 February 2026 04:40:00 +0000 (0:00:01.656) 0:00:21.388 *******
2026-02-20 04:40:27.539250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539303 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:40:27.539316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539343 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:40:27.539356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-20 04:40:27.539409 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:40:27.539422 | orchestrator |
2026-02-20 04:40:27.539435 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-20 04:40:27.539448 | orchestrator | Friday 20 February 2026 04:40:02 +0000 (0:00:01.845) 0:00:23.234 *******
2026-02-20 04:40:27.539461 | orchestrator |
2026-02-20 04:40:27.539474 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-20 04:40:27.539486 | orchestrator | Friday 20 February 2026 04:40:03 +0000 (0:00:00.441) 0:00:23.675 *******
2026-02-20 04:40:27.539497 | orchestrator |
2026-02-20 04:40:27.539508 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-20 04:40:27.539519 | orchestrator | Friday 20 February 2026 04:40:03 +0000 (0:00:00.421) 0:00:24.096 *******
2026-02-20 04:40:27.539530 | orchestrator |
2026-02-20 04:40:27.539541 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-20 04:40:27.539552 | orchestrator | Friday 20 February 2026 04:40:04 +0000 (0:00:00.761) 0:00:24.858 *******
2026-02-20 04:40:27.539571 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:40:27.539582 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:40:27.539593 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:40:27.539604 | orchestrator |
2026-02-20 04:40:27.539615 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-20 04:40:27.539626 | orchestrator | Friday 20 February 2026 04:40:15 +0000 (0:00:11.325) 0:00:36.184 *******
2026-02-20 04:40:27.539637 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:40:27.539648 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:40:27.539659 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:40:27.539670 | orchestrator |
2026-02-20 04:40:27.539681 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:40:27.539693 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:40:27.539706 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:40:27.539717 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:40:27.539728 | orchestrator |
2026-02-20 04:40:27.539739 | orchestrator |
2026-02-20 04:40:27.539750 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:40:27.539760 | orchestrator | Friday 20 February 2026 04:40:27 +0000 (0:00:11.582) 0:00:47.767 *******
2026-02-20 04:40:27.539772 | orchestrator | ===============================================================================
2026-02-20 04:40:27.539814 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.58s
2026-02-20 04:40:27.539826 | orchestrator | redis : Restart redis container ---------------------------------------- 11.33s
2026-02-20 04:40:27.539837 | orchestrator | redis : Copying over redis config files --------------------------------- 4.03s
2026-02-20 04:40:27.539848 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.23s
2026-02-20 04:40:27.539859 | orchestrator | redis : Copying over default config.json files -------------------------- 3.15s
2026-02-20 04:40:27.539869 | orchestrator | redis : include_tasks --------------------------------------------------- 2.53s
2026-02-20 04:40:27.539880 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.05s
2026-02-20 04:40:27.539891 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.85s
2026-02-20 04:40:27.539902 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.66s
2026-02-20 04:40:27.539913 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.64s
2026-02-20 04:40:27.539923 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.62s
2026-02-20 04:40:27.539934 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.61s
2026-02-20 04:40:27.839739 | orchestrator | + osism apply -a upgrade mariadb
2026-02-20 04:40:29.759745 | orchestrator | 2026-02-20 04:40:29 | INFO  | Task 54387c2e-efb3-40f7-864b-3be82dc63417 (mariadb) was prepared for execution.
2026-02-20 04:40:29.759941 | orchestrator | 2026-02-20 04:40:29 | INFO  | It takes a moment until task 54387c2e-efb3-40f7-864b-3be82dc63417 (mariadb) has been started and output is visible here.
2026-02-20 04:40:44.207323 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-02-20 04:40:44.207434 | orchestrator | -vvvv to see details
2026-02-20 04:40:44.207447 | orchestrator |
2026-02-20 04:40:44.207456 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 04:40:44.207465 | orchestrator |
2026-02-20 04:40:44.207473 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 04:40:44.207480 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:40:44.207488 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:40:44.207495 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:40:44.207532 | orchestrator |
2026-02-20 04:40:44.207550 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 04:40:44.207559 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-20 04:40:44.207568 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-20 04:40:44.207587 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-20 04:40:44.207594 | orchestrator |
2026-02-20 04:40:44.207602 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-20 04:40:44.207609 | orchestrator |
2026-02-20 04:40:44.207616 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-20 04:40:44.207624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:40:44.207631 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 04:40:44.207638 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 04:40:44.207646 | orchestrator |
2026-02-20 04:40:44.207653 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-20 04:40:44.207660 | orchestrator |
included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:40:44.207669 | orchestrator | 2026-02-20 04:40:44.207677 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-20 04:40:44.207689 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + 
mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2026-02-20 04:40:44.207721 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2026-02-20 04:40:44.207744 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"msg": "{'mariadb': {'container_name': 'mariadb', 'group': '{{ mariadb_shard_group }}', 'enabled': True, 'image': '{{ mariadb_image_full }}', 'volumes': '{{ mariadb_default_volumes + mariadb_extra_volumes }}', 'dimensions': '{{ mariadb_dimensions }}', 'healthcheck': '{{ mariadb_healthcheck }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': '{{ enable_mariadb | bool and not enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', '{% if enable_mariadb_clustercheck | bool %}option httpchk{% endif %}'], 'custom_member_list': \"{{ internal_haproxy_members.split(';') }}\"}, 'mariadb_external_lb': {'enabled': '{{ enable_external_mariadb_load_balancer | bool }}', 'mode': 'tcp', 'port': '{{ database_port }}', 'listen_port': '{{ mariadb_port }}', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': \"{{ external_haproxy_members.split(';') }}\"}}}, 'mariadb-clustercheck': {'container_name': 'mariadb_clustercheck', 'group': '{{ mariadb_shard_group }}', 'enabled': '{{ enable_mariadb_clustercheck | bool }}', 'image': '{{ mariadb_clustercheck_image_full }}', 'volumes': '{{ mariadb_clustercheck_default_volumes + mariadb_clustercheck_extra_volumes }}', 'dimensions': '{{ mariadb_clustercheck_dimensions }}', 'environment': {'MYSQL_USERNAME': '{{ mariadb_monitor_user }}', 'MYSQL_PASSWORD': '{% if enable_proxysql | bool %}{{ mariadb_monitor_password }}{% endif %}', 'MYSQL_HOST': '{{ api_interface_address }}', 'AVAILABLE_WHEN_DONOR': '1'}}}: ['{{ 
node_config_directory }}/mariadb/:{{ container_config_directory }}/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', \"{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}\", '{{ mariadb_datadir_volume }}:/var/lib/mysql', 'kolla_logs:/var/log/kolla/']: 'dict object' has no attribute 'os_family'"} 2026-02-20 04:40:44.426259 | orchestrator | 2026-02-20 04:40:44 | INFO  | Task 1393b403-7dba-4599-b77c-ef0165422246 (mariadb) was prepared for execution. 2026-02-20 04:40:44.426352 | orchestrator | 2026-02-20 04:40:44 | INFO  | It takes a moment until task 1393b403-7dba-4599-b77c-ef0165422246 (mariadb) has been started and output is visible here. 2026-02-20 04:41:03.232602 | orchestrator | 2026-02-20 04:41:03.232694 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:41:03.232715 | orchestrator | testbed-node-0 : ok=4  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-02-20 04:41:03.232736 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-02-20 04:41:03.232741 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-02-20 04:41:03.232746 | orchestrator | 2026-02-20 04:41:03.232752 | orchestrator | 2026-02-20 04:41:03.232757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:41:03.232762 | orchestrator | 2026-02-20 04:41:03.232767 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:41:03.232772 | orchestrator | Friday 20 February 2026 04:40:49 +0000 (0:00:01.294) 0:00:01.294 ******* 2026-02-20 04:41:03.232777 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:41:03.232783 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:41:03.232833 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:41:03.232841 | orchestrator | 
2026-02-20 04:41:03.232849 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 04:41:03.232887 | orchestrator | Friday 20 February 2026 04:40:51 +0000 (0:00:01.748) 0:00:03.042 *******
2026-02-20 04:41:03.232892 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-20 04:41:03.232898 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-20 04:41:03.232903 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-20 04:41:03.232908 | orchestrator |
2026-02-20 04:41:03.232913 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-20 04:41:03.232917 | orchestrator |
2026-02-20 04:41:03.232922 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-20 04:41:03.232927 | orchestrator | Friday 20 February 2026 04:40:52 +0000 (0:00:01.658) 0:00:04.700 *******
2026-02-20 04:41:03.232933 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:41:03.232938 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 04:41:03.232943 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 04:41:03.232948 | orchestrator |
2026-02-20 04:41:03.232953 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-20 04:41:03.232958 | orchestrator | Friday 20 February 2026 04:40:54 +0000 (0:00:01.384) 0:00:06.085 *******
2026-02-20 04:41:03.232963 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:41:03.232969 | orchestrator |
2026-02-20 04:41:03.232974 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-20 04:41:03.232979 | orchestrator | Friday 20 February 2026 04:40:56 +0000 (0:00:01.747) 0:00:07.833 *******
2026-02-20 04:41:03.232988 |
orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:03.233031 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:03.233038 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:03.233048 | orchestrator | 2026-02-20 04:41:03.233053 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-20 04:41:03.233058 | orchestrator | Friday 20 February 2026 04:40:59 +0000 (0:00:03.453) 0:00:11.287 ******* 2026-02-20 04:41:03.233063 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:41:03.233069 | orchestrator | 
skipping: [testbed-node-2]
2026-02-20 04:41:03.233073 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:41:03.233078 | orchestrator |
2026-02-20 04:41:03.233083 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-20 04:41:03.233088 | orchestrator | Friday 20 February 2026 04:41:01 +0000 (0:00:01.542) 0:00:12.829 *******
2026-02-20 04:41:03.233093 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:41:03.233098 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:41:03.233102 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:41:03.233107 | orchestrator |
2026-02-20 04:41:03.233115 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-20 04:41:19.574547 | orchestrator | Friday 20 February 2026 04:41:03 +0000 (0:00:02.106) 0:00:14.936 *******
2026-02-20 04:41:19.574725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:19.574834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:19.574903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-20 04:41:19.574928 | orchestrator |
2026-02-20 04:41:19.574949 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-20 04:41:19.574982 | orchestrator | Friday 20 February 2026 04:41:07 +0000 (0:00:04.484) 0:00:19.421 *******
2026-02-20 04:41:19.575001 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:41:19.575024 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:41:19.575044 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:41:19.575065 | orchestrator |
2026-02-20 04:41:19.575079 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-20 04:41:19.575093 | orchestrator | Friday 20 February 2026 04:41:09 +0000 (0:00:02.086) 0:00:21.507 *******
2026-02-20 04:41:19.575106 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:41:19.575119 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:41:19.575132 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:41:19.575145 | orchestrator |
2026-02-20 04:41:19.575158 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-20 04:41:19.575171 | orchestrator | Friday 20 February 2026 04:41:14 +0000 (0:00:04.941) 0:00:26.449 *******
2026-02-20 04:41:19.575184 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:41:19.575198 | orchestrator |
2026-02-20 04:41:19.575211 | orchestrator |
TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-20 04:41:19.575224 | orchestrator | Friday 20 February 2026 04:41:16 +0000 (0:00:01.742) 0:00:28.192 ******* 2026-02-20 04:41:19.575256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:23.006010 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:41:23.006221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', '']}}}})  2026-02-20 04:41:23.006294 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:41:23.006309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 
04:41:23.006321 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:41:23.006333 | orchestrator | 2026-02-20 04:41:23.006346 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-20 04:41:23.006373 | orchestrator | Friday 20 February 2026 04:41:19 +0000 (0:00:03.093) 0:00:31.285 ******* 2026-02-20 04:41:23.006410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:23.006431 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:41:23.006443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:23.006456 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:41:23.006483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:31.669344 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:41:31.669460 | orchestrator | 2026-02-20 04:41:31.669477 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-20 04:41:31.669490 | orchestrator | Friday 20 February 2026 04:41:22 +0000 (0:00:03.431) 0:00:34.716 ******* 2026-02-20 04:41:31.669507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:31.669522 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:41:31.669551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:31.669587 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:41:31.669634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:31.669666 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:41:31.669686 | orchestrator | 2026-02-20 04:41:31.669704 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-20 04:41:31.669721 | orchestrator | Friday 20 February 2026 04:41:27 +0000 (0:00:04.112) 0:00:38.829 ******* 2026-02-20 04:41:31.669749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:31.669832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:36.999748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-20 04:41:36.999928 | orchestrator | 2026-02-20 04:41:36.999947 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-20 04:41:36.999960 | orchestrator | Friday 20 February 2026 04:41:31 +0000 (0:00:04.552) 0:00:43.382 ******* 2026-02-20 04:41:36.999973 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:41:36.999985 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:41:36.999996 | orchestrator | } 2026-02-20 04:41:37.000007 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:41:37.000019 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:41:37.000030 | orchestrator | } 2026-02-20 04:41:37.000041 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:41:37.000052 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:41:37.000063 | orchestrator | } 2026-02-20 04:41:37.000074 | orchestrator | 2026-02-20 04:41:37.000085 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:41:37.000096 | orchestrator | Friday 20 February 2026 04:41:33 +0000 (0:00:01.361) 0:00:44.743 ******* 2026-02-20 04:41:37.000145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:37.000174 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:41:37.000214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:37.000249 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:41:37.000269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:41:37.000289 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:41:37.000307 | orchestrator | 2026-02-20 04:41:37.000325 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-20 04:41:37.000364 | orchestrator | Friday 20 February 2026 04:41:36 +0000 (0:00:03.961) 0:00:48.704 ******* 2026-02-20 04:42:03.871861 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872000 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872026 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872046 | orchestrator | 2026-02-20 04:42:03.872066 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-20 04:42:03.872085 | 
orchestrator | Friday 20 February 2026 04:41:38 +0000 (0:00:01.374) 0:00:50.079 ******* 2026-02-20 04:42:03.872132 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872152 | orchestrator | 2026-02-20 04:42:03.872170 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-20 04:42:03.872188 | orchestrator | Friday 20 February 2026 04:41:39 +0000 (0:00:01.105) 0:00:51.185 ******* 2026-02-20 04:42:03.872206 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872224 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872243 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872262 | orchestrator | 2026-02-20 04:42:03.872282 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-20 04:42:03.872301 | orchestrator | Friday 20 February 2026 04:41:40 +0000 (0:00:01.375) 0:00:52.560 ******* 2026-02-20 04:42:03.872319 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872331 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872343 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872357 | orchestrator | 2026-02-20 04:42:03.872370 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-20 04:42:03.872399 | orchestrator | Friday 20 February 2026 04:41:42 +0000 (0:00:01.519) 0:00:54.080 ******* 2026-02-20 04:42:03.872412 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872424 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872437 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872449 | orchestrator | 2026-02-20 04:42:03.872463 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-20 04:42:03.872475 | orchestrator | Friday 20 February 2026 04:41:43 +0000 (0:00:01.360) 0:00:55.441 ******* 2026-02-20 04:42:03.872487 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 04:42:03.872500 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872512 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872525 | orchestrator | 2026-02-20 04:42:03.872538 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-20 04:42:03.872548 | orchestrator | Friday 20 February 2026 04:41:45 +0000 (0:00:01.344) 0:00:56.785 ******* 2026-02-20 04:42:03.872559 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872576 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872603 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872622 | orchestrator | 2026-02-20 04:42:03.872638 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-20 04:42:03.872656 | orchestrator | Friday 20 February 2026 04:41:46 +0000 (0:00:01.360) 0:00:58.146 ******* 2026-02-20 04:42:03.872675 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872695 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872712 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872729 | orchestrator | 2026-02-20 04:42:03.872741 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-20 04:42:03.872752 | orchestrator | Friday 20 February 2026 04:41:47 +0000 (0:00:01.535) 0:00:59.682 ******* 2026-02-20 04:42:03.872763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 04:42:03.872774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 04:42:03.872784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 04:42:03.872825 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 04:42:03.872848 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  
2026-02-20 04:42:03.872859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 04:42:03.872870 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872880 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-20 04:42:03.872891 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-20 04:42:03.872902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-20 04:42:03.872913 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.872935 | orchestrator | 2026-02-20 04:42:03.872946 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-20 04:42:03.872957 | orchestrator | Friday 20 February 2026 04:41:49 +0000 (0:00:01.387) 0:01:01.069 ******* 2026-02-20 04:42:03.872968 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.872979 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.872989 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873000 | orchestrator | 2026-02-20 04:42:03.873011 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-20 04:42:03.873022 | orchestrator | Friday 20 February 2026 04:41:50 +0000 (0:00:01.384) 0:01:02.454 ******* 2026-02-20 04:42:03.873032 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873043 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873054 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873065 | orchestrator | 2026-02-20 04:42:03.873075 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-20 04:42:03.873092 | orchestrator | Friday 20 February 2026 04:41:52 +0000 (0:00:01.332) 0:01:03.787 ******* 2026-02-20 04:42:03.873118 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873141 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873158 | orchestrator 
| skipping: [testbed-node-2] 2026-02-20 04:42:03.873177 | orchestrator | 2026-02-20 04:42:03.873195 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-20 04:42:03.873215 | orchestrator | Friday 20 February 2026 04:41:53 +0000 (0:00:01.327) 0:01:05.114 ******* 2026-02-20 04:42:03.873267 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873288 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873306 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873493 | orchestrator | 2026-02-20 04:42:03.873511 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-20 04:42:03.873546 | orchestrator | Friday 20 February 2026 04:41:54 +0000 (0:00:01.361) 0:01:06.476 ******* 2026-02-20 04:42:03.873558 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873569 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873580 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873591 | orchestrator | 2026-02-20 04:42:03.873602 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-20 04:42:03.873612 | orchestrator | Friday 20 February 2026 04:41:56 +0000 (0:00:01.425) 0:01:07.902 ******* 2026-02-20 04:42:03.873623 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873634 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873645 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873656 | orchestrator | 2026-02-20 04:42:03.873667 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-20 04:42:03.873678 | orchestrator | Friday 20 February 2026 04:41:57 +0000 (0:00:01.516) 0:01:09.418 ******* 2026-02-20 04:42:03.873688 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873699 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873710 | orchestrator 
| skipping: [testbed-node-2] 2026-02-20 04:42:03.873721 | orchestrator | 2026-02-20 04:42:03.873732 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-20 04:42:03.873742 | orchestrator | Friday 20 February 2026 04:41:59 +0000 (0:00:01.381) 0:01:10.800 ******* 2026-02-20 04:42:03.873754 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873765 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:03.873776 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:03.873787 | orchestrator | 2026-02-20 04:42:03.873834 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-20 04:42:03.873847 | orchestrator | Friday 20 February 2026 04:42:00 +0000 (0:00:01.472) 0:01:12.272 ******* 2026-02-20 04:42:03.873864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:03.873892 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:03.873916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:08.933063 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:08.933204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:08.934227 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:08.934352 | orchestrator | 2026-02-20 04:42:08.934371 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-20 04:42:08.934385 | orchestrator | Friday 20 February 2026 04:42:03 +0000 (0:00:03.306) 0:01:15.579 ******* 2026-02-20 04:42:08.934396 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:08.934407 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:08.934419 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:08.934430 | orchestrator | 2026-02-20 04:42:08.934441 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-20 04:42:08.934453 | orchestrator | Friday 20 February 2026 04:42:05 +0000 (0:00:01.564) 0:01:17.143 ******* 2026-02-20 04:42:08.934498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:08.934515 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:42:08.934563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:08.934576 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:42:08.934588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-20 04:42:08.934600 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:42:08.934612 | orchestrator | 2026-02-20 04:42:08.934623 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-20 04:42:08.934635 | orchestrator | Friday 20 February 2026 04:42:08 +0000 (0:00:03.270) 0:01:20.413 ******* 2026-02-20 04:42:08.934663 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.040601 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.040702 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.040713 | orchestrator | 2026-02-20 04:44:33.040721 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-20 
04:44:33.040736 | orchestrator | Friday 20 February 2026 04:42:10 +0000 (0:00:01.721) 0:01:22.135 ******* 2026-02-20 04:44:33.040756 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.040765 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.040771 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.040827 | orchestrator | 2026-02-20 04:44:33.040833 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-20 04:44:33.040839 | orchestrator | Friday 20 February 2026 04:42:11 +0000 (0:00:01.344) 0:01:23.480 ******* 2026-02-20 04:44:33.040843 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.040847 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.040851 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.040855 | orchestrator | 2026-02-20 04:44:33.040859 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-20 04:44:33.040864 | orchestrator | Friday 20 February 2026 04:42:13 +0000 (0:00:01.454) 0:01:24.935 ******* 2026-02-20 04:44:33.040869 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.040875 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.040881 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.040887 | orchestrator | 2026-02-20 04:44:33.040893 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-20 04:44:33.040898 | orchestrator | Friday 20 February 2026 04:42:14 +0000 (0:00:01.767) 0:01:26.703 ******* 2026-02-20 04:44:33.040904 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.040910 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.040916 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.040922 | orchestrator | 2026-02-20 04:44:33.040928 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-20 
04:44:33.040934 | orchestrator | Friday 20 February 2026 04:42:16 +0000 (0:00:01.946) 0:01:28.649 ******* 2026-02-20 04:44:33.040940 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.040948 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.040955 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.040961 | orchestrator | 2026-02-20 04:44:33.040969 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-20 04:44:33.040973 | orchestrator | Friday 20 February 2026 04:42:18 +0000 (0:00:01.910) 0:01:30.561 ******* 2026-02-20 04:44:33.040977 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.040981 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.040985 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.040989 | orchestrator | 2026-02-20 04:44:33.040993 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-20 04:44:33.040997 | orchestrator | Friday 20 February 2026 04:42:20 +0000 (0:00:01.350) 0:01:31.911 ******* 2026-02-20 04:44:33.041000 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041004 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041008 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041012 | orchestrator | 2026-02-20 04:44:33.041016 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-20 04:44:33.041020 | orchestrator | Friday 20 February 2026 04:42:21 +0000 (0:00:01.395) 0:01:33.307 ******* 2026-02-20 04:44:33.041023 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041027 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041031 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041035 | orchestrator | 2026-02-20 04:44:33.041039 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-20 04:44:33.041043 | orchestrator | Friday 20 February 2026 04:42:23 +0000 
(0:00:02.045) 0:01:35.352 ******* 2026-02-20 04:44:33.041047 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041067 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041071 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041075 | orchestrator | 2026-02-20 04:44:33.041079 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-20 04:44:33.041083 | orchestrator | Friday 20 February 2026 04:42:25 +0000 (0:00:01.466) 0:01:36.818 ******* 2026-02-20 04:44:33.041087 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.041090 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041094 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041098 | orchestrator | 2026-02-20 04:44:33.041102 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-20 04:44:33.041106 | orchestrator | Friday 20 February 2026 04:42:26 +0000 (0:00:01.385) 0:01:38.203 ******* 2026-02-20 04:44:33.041110 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041113 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041117 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041121 | orchestrator | 2026-02-20 04:44:33.041125 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-20 04:44:33.041129 | orchestrator | Friday 20 February 2026 04:42:30 +0000 (0:00:03.952) 0:01:42.155 ******* 2026-02-20 04:44:33.041133 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041136 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041140 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041144 | orchestrator | 2026-02-20 04:44:33.041148 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-20 04:44:33.041152 | orchestrator | Friday 20 February 2026 04:42:31 +0000 (0:00:01.415) 0:01:43.571 ******* 2026-02-20 
04:44:33.041155 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041159 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041164 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041168 | orchestrator | 2026-02-20 04:44:33.041173 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-20 04:44:33.041178 | orchestrator | Friday 20 February 2026 04:42:33 +0000 (0:00:01.304) 0:01:44.876 ******* 2026-02-20 04:44:33.041182 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.041187 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041191 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041196 | orchestrator | 2026-02-20 04:44:33.041203 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 04:44:33.041210 | orchestrator | Friday 20 February 2026 04:42:34 +0000 (0:00:01.727) 0:01:46.603 ******* 2026-02-20 04:44:33.041216 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.041222 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041229 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041249 | orchestrator | 2026-02-20 04:44:33.041256 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-20 04:44:33.041264 | orchestrator | Friday 20 February 2026 04:42:36 +0000 (0:00:01.558) 0:01:48.161 ******* 2026-02-20 04:44:33.041271 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.041277 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041289 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041296 | orchestrator | 2026-02-20 04:44:33.041302 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-20 04:44:33.041309 | orchestrator | Friday 20 February 2026 04:42:37 +0000 (0:00:01.494) 0:01:49.656 ******* 2026-02-20 
04:44:33.041315 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:44:33.041324 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:44:33.041330 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:44:33.041338 | orchestrator | 2026-02-20 04:44:33.041345 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-20 04:44:33.041352 | orchestrator | Friday 20 February 2026 04:42:39 +0000 (0:00:01.643) 0:01:51.300 ******* 2026-02-20 04:44:33.041358 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:44:33.041365 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041372 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041386 | orchestrator | 2026-02-20 04:44:33.041393 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-20 04:44:33.041399 | orchestrator | 2026-02-20 04:44:33.041405 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-20 04:44:33.041412 | orchestrator | Friday 20 February 2026 04:42:41 +0000 (0:00:01.964) 0:01:53.264 ******* 2026-02-20 04:44:33.041418 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:44:33.041424 | orchestrator | 2026-02-20 04:44:33.041431 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 04:44:33.041438 | orchestrator | Friday 20 February 2026 04:43:06 +0000 (0:00:25.355) 0:02:18.619 ******* 2026-02-20 04:44:33.041444 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041451 | orchestrator | 2026-02-20 04:44:33.041458 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 04:44:33.041463 | orchestrator | Friday 20 February 2026 04:43:12 +0000 (0:00:05.578) 0:02:24.197 ******* 2026-02-20 04:44:33.041467 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041471 | orchestrator | 2026-02-20 04:44:33.041476 | 
orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-20 04:44:33.041480 | orchestrator | 2026-02-20 04:44:33.041485 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-20 04:44:33.041489 | orchestrator | Friday 20 February 2026 04:43:15 +0000 (0:00:02.926) 0:02:27.123 ******* 2026-02-20 04:44:33.041494 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:44:33.041498 | orchestrator | 2026-02-20 04:44:33.041503 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 04:44:33.041507 | orchestrator | Friday 20 February 2026 04:43:41 +0000 (0:00:26.194) 0:02:53.318 ******* 2026-02-20 04:44:33.041512 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 2026-02-20 04:44:33.041518 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041522 | orchestrator | 2026-02-20 04:44:33.041526 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 04:44:33.041530 | orchestrator | Friday 20 February 2026 04:43:49 +0000 (0:00:08.071) 0:03:01.389 ******* 2026-02-20 04:44:33.041534 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:44:33.041538 | orchestrator | 2026-02-20 04:44:33.041541 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-20 04:44:33.041545 | orchestrator | 2026-02-20 04:44:33.041549 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-20 04:44:33.041553 | orchestrator | Friday 20 February 2026 04:43:52 +0000 (0:00:02.927) 0:03:04.317 ******* 2026-02-20 04:44:33.041557 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:44:33.041561 | orchestrator | 2026-02-20 04:44:33.041564 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-20 
04:44:33.041568 | orchestrator | Friday 20 February 2026 04:44:18 +0000 (0:00:25.810) 0:03:30.128 ******* 2026-02-20 04:44:33.041572 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041576 | orchestrator | 2026-02-20 04:44:33.041580 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-20 04:44:33.041583 | orchestrator | Friday 20 February 2026 04:44:23 +0000 (0:00:05.312) 0:03:35.440 ******* 2026-02-20 04:44:33.041587 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-20 04:44:33.041591 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-20 04:44:33.041595 | orchestrator | mariadb_bootstrap_restart 2026-02-20 04:44:33.041599 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:44:33.041603 | orchestrator | 2026-02-20 04:44:33.041607 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-20 04:44:33.041610 | orchestrator | skipping: no hosts matched 2026-02-20 04:44:33.041614 | orchestrator | 2026-02-20 04:44:33.041618 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-20 04:44:33.041622 | orchestrator | skipping: no hosts matched 2026-02-20 04:44:33.041630 | orchestrator | 2026-02-20 04:44:33.041634 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-20 04:44:33.041637 | orchestrator | 2026-02-20 04:44:33.041641 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-20 04:44:33.041645 | orchestrator | Friday 20 February 2026 04:44:27 +0000 (0:00:04.143) 0:03:39.583 ******* 2026-02-20 04:44:33.041649 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:44:33.041653 | orchestrator | 2026-02-20 04:44:33.041657 | orchestrator | TASK [mariadb : Creating shard root mysql user] 
******************************** 2026-02-20 04:44:33.041660 | orchestrator | Friday 20 February 2026 04:44:29 +0000 (0:00:01.880) 0:03:41.464 ******* 2026-02-20 04:44:33.041664 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:44:33.041668 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:44:33.041675 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:44:33.041681 | orchestrator | 2026-02-20 04:44:33.041687 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-20 04:44:33.041700 | orchestrator | Friday 20 February 2026 04:44:33 +0000 (0:00:03.282) 0:03:44.746 ******* 2026-02-20 04:45:18.604419 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:45:18.604529 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:45:18.604549 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:45:18.604558 | orchestrator | 2026-02-20 04:45:18.604567 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-20 04:45:18.604589 | orchestrator | Friday 20 February 2026 04:44:36 +0000 (0:00:03.249) 0:03:47.996 ******* 2026-02-20 04:45:18.604597 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:45:18.604605 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:45:18.604613 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:45:18.604621 | orchestrator | 2026-02-20 04:45:18.604629 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-20 04:45:18.604640 | orchestrator | Friday 20 February 2026 04:44:39 +0000 (0:00:03.149) 0:03:51.146 ******* 2026-02-20 04:45:18.604653 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:45:18.604662 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:45:18.604669 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:45:18.604677 | orchestrator | 2026-02-20 04:45:18.604684 | orchestrator | TASK [service-check : mariadb | Get container facts] 
*************************** 2026-02-20 04:45:18.604692 | orchestrator | Friday 20 February 2026 04:44:42 +0000 (0:00:03.497) 0:03:54.643 ******* 2026-02-20 04:45:18.604699 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:45:18.604707 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:45:18.604716 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:45:18.604729 | orchestrator | 2026-02-20 04:45:18.604736 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-20 04:45:18.604745 | orchestrator | Friday 20 February 2026 04:44:48 +0000 (0:00:06.038) 0:04:00.682 ******* 2026-02-20 04:45:18.604752 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:45:18.604759 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:45:18.604767 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:45:18.604845 | orchestrator | 2026-02-20 04:45:18.604854 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-20 04:45:18.604862 | orchestrator | Friday 20 February 2026 04:44:52 +0000 (0:00:03.189) 0:04:03.872 ******* 2026-02-20 04:45:18.604875 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:45:18.604887 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:45:18.604899 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:45:18.604912 | orchestrator | 2026-02-20 04:45:18.604925 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-20 04:45:18.604938 | orchestrator | Friday 20 February 2026 04:44:53 +0000 (0:00:01.481) 0:04:05.353 ******* 2026-02-20 04:45:18.604951 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:45:18.604965 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:45:18.604978 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:45:18.604989 | orchestrator | 2026-02-20 04:45:18.605022 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 
2026-02-20 04:45:18.605036 | orchestrator | Friday 20 February 2026 04:44:57 +0000 (0:00:03.577) 0:04:08.930 ******* 2026-02-20 04:45:18.605049 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:45:18.605061 | orchestrator | 2026-02-20 04:45:18.605074 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-20 04:45:18.605087 | orchestrator | Friday 20 February 2026 04:44:59 +0000 (0:00:01.943) 0:04:10.874 ******* 2026-02-20 04:45:18.605100 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:45:18.605112 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:45:18.605124 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:45:18.605134 | orchestrator | 2026-02-20 04:45:18.605142 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 04:45:18.605151 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-20 04:45:18.605159 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-20 04:45:18.605167 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-20 04:45:18.605174 | orchestrator | 2026-02-20 04:45:18.605181 | orchestrator | 2026-02-20 04:45:18.605188 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 04:45:18.605196 | orchestrator | Friday 20 February 2026 04:45:18 +0000 (0:00:19.015) 0:04:29.889 ******* 2026-02-20 04:45:18.605203 | orchestrator | =============================================================================== 2026-02-20 04:45:18.605210 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.36s 2026-02-20 04:45:18.605217 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 19.02s 
2026-02-20 04:45:18.605228 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.96s 2026-02-20 04:45:18.605240 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.00s 2026-02-20 04:45:18.605252 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.04s 2026-02-20 04:45:18.605266 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.94s 2026-02-20 04:45:18.605279 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.55s 2026-02-20 04:45:18.605291 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.48s 2026-02-20 04:45:18.605304 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.11s 2026-02-20 04:45:18.605312 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.96s 2026-02-20 04:45:18.605319 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.95s 2026-02-20 04:45:18.605327 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.58s 2026-02-20 04:45:18.605349 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.50s 2026-02-20 04:45:18.605357 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.45s 2026-02-20 04:45:18.605364 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.43s 2026-02-20 04:45:18.605378 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.31s 2026-02-20 04:45:18.605385 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.28s 2026-02-20 04:45:18.605393 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.27s 
2026-02-20 04:45:18.605400 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.25s 2026-02-20 04:45:18.605407 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.19s 2026-02-20 04:45:18.895105 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-20 04:45:20.905977 | orchestrator | 2026-02-20 04:45:20 | INFO  | Task 90cb1b15-3ac3-4c90-93c1-0b82a1a2cf1b (rabbitmq) was prepared for execution. 2026-02-20 04:45:20.906120 | orchestrator | 2026-02-20 04:45:20 | INFO  | It takes a moment until task 90cb1b15-3ac3-4c90-93c1-0b82a1a2cf1b (rabbitmq) has been started and output is visible here. 2026-02-20 04:46:03.900434 | orchestrator | 2026-02-20 04:46:03.900555 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-20 04:46:03.900574 | orchestrator | 2026-02-20 04:46:03.900586 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-20 04:46:03.900598 | orchestrator | Friday 20 February 2026 04:45:26 +0000 (0:00:01.704) 0:00:01.704 ******* 2026-02-20 04:46:03.900609 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:03.900621 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:46:03.900632 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:46:03.900643 | orchestrator | 2026-02-20 04:46:03.900654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-20 04:46:03.900665 | orchestrator | Friday 20 February 2026 04:45:28 +0000 (0:00:01.583) 0:00:03.288 ******* 2026-02-20 04:46:03.900676 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-20 04:46:03.900688 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-20 04:46:03.900699 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-20 04:46:03.900710 | orchestrator | 2026-02-20 04:46:03.900721 | orchestrator | PLAY 
[Apply role rabbitmq] ***************************************************** 2026-02-20 04:46:03.900732 | orchestrator | 2026-02-20 04:46:03.900743 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 04:46:03.900753 | orchestrator | Friday 20 February 2026 04:45:30 +0000 (0:00:02.036) 0:00:05.324 ******* 2026-02-20 04:46:03.900765 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:46:03.900829 | orchestrator | 2026-02-20 04:46:03.900841 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-20 04:46:03.900852 | orchestrator | Friday 20 February 2026 04:45:32 +0000 (0:00:02.482) 0:00:07.806 ******* 2026-02-20 04:46:03.900863 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:03.900874 | orchestrator | 2026-02-20 04:46:03.900885 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-20 04:46:03.900896 | orchestrator | Friday 20 February 2026 04:45:34 +0000 (0:00:02.121) 0:00:09.928 ******* 2026-02-20 04:46:03.900908 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:03.900919 | orchestrator | 2026-02-20 04:46:03.900930 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-20 04:46:03.900941 | orchestrator | Friday 20 February 2026 04:45:38 +0000 (0:00:03.032) 0:00:12.961 ******* 2026-02-20 04:46:03.900952 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:46:03.900964 | orchestrator | 2026-02-20 04:46:03.900976 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-20 04:46:03.900990 | orchestrator | Friday 20 February 2026 04:45:47 +0000 (0:00:09.941) 0:00:22.902 ******* 2026-02-20 04:46:03.901004 | orchestrator | ok: [testbed-node-0] => { 2026-02-20 04:46:03.901016 | orchestrator |  "changed": false, 2026-02-20 
04:46:03.901029 | orchestrator |  "msg": "All assertions passed" 2026-02-20 04:46:03.901048 | orchestrator | } 2026-02-20 04:46:03.901068 | orchestrator | 2026-02-20 04:46:03.901088 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-20 04:46:03.901106 | orchestrator | Friday 20 February 2026 04:45:49 +0000 (0:00:01.380) 0:00:24.283 ******* 2026-02-20 04:46:03.901126 | orchestrator | ok: [testbed-node-0] => { 2026-02-20 04:46:03.901146 | orchestrator |  "changed": false, 2026-02-20 04:46:03.901166 | orchestrator |  "msg": "All assertions passed" 2026-02-20 04:46:03.901186 | orchestrator | } 2026-02-20 04:46:03.901205 | orchestrator | 2026-02-20 04:46:03.901225 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 04:46:03.901276 | orchestrator | Friday 20 February 2026 04:45:50 +0000 (0:00:01.663) 0:00:25.946 ******* 2026-02-20 04:46:03.901299 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:46:03.901316 | orchestrator | 2026-02-20 04:46:03.901329 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-20 04:46:03.901342 | orchestrator | Friday 20 February 2026 04:45:52 +0000 (0:00:01.770) 0:00:27.717 ******* 2026-02-20 04:46:03.901353 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:03.901364 | orchestrator | 2026-02-20 04:46:03.901375 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-20 04:46:03.901385 | orchestrator | Friday 20 February 2026 04:45:55 +0000 (0:00:02.250) 0:00:29.967 ******* 2026-02-20 04:46:03.901396 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:03.901407 | orchestrator | 2026-02-20 04:46:03.901418 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-20 04:46:03.901429 | 
orchestrator | Friday 20 February 2026 04:45:57 +0000 (0:00:02.840) 0:00:32.808 ******* 2026-02-20 04:46:03.901440 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:46:03.901451 | orchestrator | 2026-02-20 04:46:03.901462 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-20 04:46:03.901473 | orchestrator | Friday 20 February 2026 04:45:59 +0000 (0:00:01.901) 0:00:34.710 ******* 2026-02-20 04:46:03.901528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:03.901547 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:03.901561 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:03.901582 | orchestrator | 2026-02-20 04:46:03.901594 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-20 04:46:03.901605 | orchestrator | Friday 
20 February 2026 04:46:01 +0000 (0:00:01.748) 0:00:36.458 ******* 2026-02-20 04:46:03.901623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:03.901645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:23.108166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:23.108365 | orchestrator | 2026-02-20 04:46:23.108408 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-20 04:46:23.108429 | orchestrator | Friday 20 February 2026 04:46:03 +0000 (0:00:02.390) 0:00:38.849 ******* 2026-02-20 04:46:23.108441 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 04:46:23.108453 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 04:46:23.108464 | orchestrator 
| ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-20 04:46:23.108476 | orchestrator | 2026-02-20 04:46:23.108487 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-20 04:46:23.108498 | orchestrator | Friday 20 February 2026 04:46:06 +0000 (0:00:02.408) 0:00:41.258 ******* 2026-02-20 04:46:23.108509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 04:46:23.108519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 04:46:23.108530 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-20 04:46:23.108541 | orchestrator | 2026-02-20 04:46:23.108552 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-20 04:46:23.108563 | orchestrator | Friday 20 February 2026 04:46:09 +0000 (0:00:02.947) 0:00:44.205 ******* 2026-02-20 04:46:23.108574 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 04:46:23.108584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 04:46:23.108595 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-20 04:46:23.108606 | orchestrator | 2026-02-20 04:46:23.108617 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-20 04:46:23.108628 | orchestrator | Friday 20 February 2026 04:46:11 +0000 (0:00:02.293) 0:00:46.498 ******* 2026-02-20 04:46:23.108638 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 04:46:23.108664 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 
04:46:23.108677 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-20 04:46:23.108690 | orchestrator | 2026-02-20 04:46:23.108703 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-20 04:46:23.108716 | orchestrator | Friday 20 February 2026 04:46:14 +0000 (0:00:02.499) 0:00:48.998 ******* 2026-02-20 04:46:23.108729 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 04:46:23.108742 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 04:46:23.108754 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-20 04:46:23.108765 | orchestrator | 2026-02-20 04:46:23.108806 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-20 04:46:23.108826 | orchestrator | Friday 20 February 2026 04:46:16 +0000 (0:00:02.376) 0:00:51.374 ******* 2026-02-20 04:46:23.108839 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 04:46:23.108850 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 04:46:23.108861 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-20 04:46:23.108871 | orchestrator | 2026-02-20 04:46:23.108882 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-20 04:46:23.108893 | orchestrator | Friday 20 February 2026 04:46:19 +0000 (0:00:02.594) 0:00:53.968 ******* 2026-02-20 04:46:23.108915 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:46:23.108926 | orchestrator | 2026-02-20 04:46:23.108956 | orchestrator | TASK 
[service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-20 04:46:23.108968 | orchestrator | Friday 20 February 2026 04:46:20 +0000 (0:00:01.660) 0:00:55.629 ******* 2026-02-20 04:46:23.108981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:23.108996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:23.109015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:23.109028 | orchestrator | 2026-02-20 04:46:23.109039 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-20 04:46:23.109050 | orchestrator | Friday 20 February 2026 04:46:22 +0000 (0:00:02.302) 0:00:57.932 ******* 2026-02-20 04:46:23.109071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091422 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:46:33.091536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091555 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:46:33.091566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091594 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:46:33.091605 | orchestrator | 2026-02-20 04:46:33.091616 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-20 04:46:33.091628 | orchestrator | Friday 20 February 2026 04:46:24 +0000 (0:00:01.414) 0:00:59.346 ******* 2026-02-20 04:46:33.091638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091705 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 04:46:33.091716 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:46:33.091726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:46:33.091737 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:46:33.091746 | orchestrator | 2026-02-20 04:46:33.091756 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-20 04:46:33.091766 | orchestrator | Friday 20 February 2026 04:46:26 +0000 (0:00:01.924) 0:01:01.270 ******* 2026-02-20 04:46:33.091885 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:46:33.091897 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:46:33.091907 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:46:33.091917 | orchestrator | 2026-02-20 04:46:33.091927 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-20 04:46:33.091938 | orchestrator | Friday 20 February 2026 04:46:30 
+0000 (0:00:04.503) 0:01:05.774 ******* 2026-02-20 04:46:33.091951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:46:33.092064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:48:24.843388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-20 04:48:24.843521 | orchestrator | 2026-02-20 04:48:24.843549 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-20 04:48:24.843572 | orchestrator | Friday 20 February 2026 04:46:33 +0000 (0:00:02.264) 0:01:08.039 ******* 2026-02-20 04:48:24.843593 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:48:24.843615 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:48:24.843635 | orchestrator | } 2026-02-20 04:48:24.843655 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:48:24.843668 | orchestrator |  "msg": 
"Notifying handlers" 2026-02-20 04:48:24.843679 | orchestrator | } 2026-02-20 04:48:24.843690 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:48:24.843701 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:48:24.843712 | orchestrator | } 2026-02-20 04:48:24.843724 | orchestrator | 2026-02-20 04:48:24.843736 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:48:24.843748 | orchestrator | Friday 20 February 2026 04:46:34 +0000 (0:00:01.389) 0:01:09.428 ******* 2026-02-20 04:48:24.843840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:48:24.843857 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:48:24.843869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:48:24.843888 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:48:24.843934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-02-20 04:48:24.843958 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:48:24.843978 | orchestrator | 2026-02-20 04:48:24.843999 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-20 04:48:24.844020 | orchestrator | Friday 20 February 2026 04:46:36 +0000 (0:00:01.999) 0:01:11.428 ******* 2026-02-20 04:48:24.844040 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:48:24.844055 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:48:24.844068 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:48:24.844081 | orchestrator | 2026-02-20 04:48:24.844094 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-20 04:48:24.844118 | orchestrator | 2026-02-20 04:48:24.844136 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-20 04:48:24.844156 | orchestrator | Friday 20 February 2026 04:46:38 +0000 (0:00:01.731) 0:01:13.159 ******* 2026-02-20 04:48:24.844175 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:48:24.844195 | orchestrator | 2026-02-20 04:48:24.844215 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-20 04:48:24.844234 | orchestrator | Friday 20 February 2026 04:46:40 +0000 (0:00:02.002) 0:01:15.161 ******* 2026-02-20 04:48:24.844254 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:48:24.844273 | orchestrator | 2026-02-20 04:48:24.844291 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-20 04:48:24.844309 | orchestrator | Friday 20 February 2026 04:46:50 +0000 (0:00:09.909) 0:01:25.071 ******* 2026-02-20 04:48:24.844328 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:48:24.844347 | orchestrator | 2026-02-20 04:48:24.844366 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
********************************
2026-02-20 04:48:24.844394 | orchestrator | Friday 20 February 2026 04:46:59 +0000 (0:00:09.126) 0:01:34.198 *******
2026-02-20 04:48:24.844414 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:48:24.844434 | orchestrator |
2026-02-20 04:48:24.844446 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-20 04:48:24.844457 | orchestrator |
2026-02-20 04:48:24.844468 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-20 04:48:24.844479 | orchestrator | Friday 20 February 2026 04:47:09 +0000 (0:00:10.202) 0:01:44.400 *******
2026-02-20 04:48:24.844490 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:48:24.844501 | orchestrator |
2026-02-20 04:48:24.844512 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-20 04:48:24.844523 | orchestrator | Friday 20 February 2026 04:47:11 +0000 (0:00:01.754) 0:01:46.155 *******
2026-02-20 04:48:24.844534 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:48:24.844545 | orchestrator |
2026-02-20 04:48:24.844556 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-20 04:48:24.844566 | orchestrator | Friday 20 February 2026 04:47:20 +0000 (0:00:09.340) 0:01:55.495 *******
2026-02-20 04:48:24.844577 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:48:24.844588 | orchestrator |
2026-02-20 04:48:24.844599 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-20 04:48:24.844610 | orchestrator | Friday 20 February 2026 04:47:34 +0000 (0:00:14.072) 0:02:09.568 *******
2026-02-20 04:48:24.844621 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:48:24.844632 | orchestrator |
2026-02-20 04:48:24.844647 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-20 04:48:24.844665 | orchestrator |
2026-02-20 04:48:24.844683 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-20 04:48:24.844702 | orchestrator | Friday 20 February 2026 04:47:44 +0000 (0:00:09.924) 0:02:19.492 *******
2026-02-20 04:48:24.844721 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:48:24.844740 | orchestrator |
2026-02-20 04:48:24.844760 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-20 04:48:24.844806 | orchestrator | Friday 20 February 2026 04:47:46 +0000 (0:00:01.768) 0:02:21.260 *******
2026-02-20 04:48:24.844818 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:48:24.844829 | orchestrator |
2026-02-20 04:48:24.844840 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-20 04:48:24.844851 | orchestrator | Friday 20 February 2026 04:47:57 +0000 (0:00:11.433) 0:02:32.694 *******
2026-02-20 04:48:24.844862 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:48:24.844872 | orchestrator |
2026-02-20 04:48:24.844883 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-20 04:48:24.844894 | orchestrator | Friday 20 February 2026 04:48:12 +0000 (0:00:15.146) 0:02:47.841 *******
2026-02-20 04:48:24.844905 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:48:24.844916 | orchestrator |
2026-02-20 04:48:24.844935 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-20 04:48:24.844945 | orchestrator |
2026-02-20 04:48:24.844956 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-20 04:48:24.844978 | orchestrator | Friday 20 February 2026 04:48:24 +0000 (0:00:11.947) 0:02:59.788 *******
2026-02-20 04:48:30.907531 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-20 04:48:30.907664 | orchestrator |
2026-02-20 04:48:30.907678 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-20 04:48:30.907688 | orchestrator | Friday 20 February 2026 04:48:26 +0000 (0:00:01.396) 0:03:01.185 *******
2026-02-20 04:48:30.907698 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:48:30.907708 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:48:30.907717 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:48:30.907726 | orchestrator |
2026-02-20 04:48:30.907736 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:48:30.907746 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-20 04:48:30.907759 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 04:48:30.907816 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-20 04:48:30.907827 | orchestrator |
2026-02-20 04:48:30.907835 | orchestrator |
2026-02-20 04:48:30.907844 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:48:30.907853 | orchestrator | Friday 20 February 2026 04:48:30 +0000 (0:00:04.294) 0:03:05.480 *******
2026-02-20 04:48:30.907862 | orchestrator | ===============================================================================
2026-02-20 04:48:30.907871 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 38.35s
2026-02-20 04:48:30.907880 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 32.07s
2026-02-20 04:48:30.907889 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 30.68s
2026-02-20 04:48:30.907898 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.94s
2026-02-20 04:48:30.907906 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.52s
2026-02-20 04:48:30.907915 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.50s
2026-02-20 04:48:30.907924 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.29s
2026-02-20 04:48:30.907933 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.03s
2026-02-20 04:48:30.907942 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.95s
2026-02-20 04:48:30.907950 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.84s
2026-02-20 04:48:30.907959 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.59s
2026-02-20 04:48:30.907990 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.50s
2026-02-20 04:48:30.907999 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.48s
2026-02-20 04:48:30.908008 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.41s
2026-02-20 04:48:30.908017 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.39s
2026-02-20 04:48:30.908025 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.38s
2026-02-20 04:48:30.908034 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.30s
2026-02-20 04:48:30.908043 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.29s
2026-02-20 04:48:30.908051 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.26s
2026-02-20 04:48:30.908084 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.25s
2026-02-20 04:48:31.197317 | orchestrator | + osism apply -a upgrade openvswitch
2026-02-20 04:48:33.183911 | orchestrator | 2026-02-20 04:48:33 | INFO  | Task 0232c1bb-be92-4111-9e17-31a96aaf41e1 (openvswitch) was prepared for execution.
2026-02-20 04:48:33.184032 | orchestrator | 2026-02-20 04:48:33 | INFO  | It takes a moment until task 0232c1bb-be92-4111-9e17-31a96aaf41e1 (openvswitch) has been started and output is visible here.
2026-02-20 04:48:58.569417 | orchestrator |
2026-02-20 04:48:58.569534 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 04:48:58.569551 | orchestrator |
2026-02-20 04:48:58.569562 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 04:48:58.569572 | orchestrator | Friday 20 February 2026 04:48:38 +0000 (0:00:01.383) 0:00:01.383 *******
2026-02-20 04:48:58.569583 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:48:58.569600 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:48:58.569616 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:48:58.569634 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:48:58.569651 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:48:58.569667 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:48:58.569684 | orchestrator |
2026-02-20 04:48:58.569702 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 04:48:58.569719 | orchestrator | Friday 20 February 2026 04:48:41 +0000 (0:00:02.572) 0:00:03.955 *******
2026-02-20 04:48:58.569736 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569755 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569841 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569853 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569863 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569873 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-20 04:48:58.569883 | orchestrator |
2026-02-20 04:48:58.569894 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-20 04:48:58.569904 | orchestrator |
2026-02-20 04:48:58.569915 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-20 04:48:58.569925 | orchestrator | Friday 20 February 2026 04:48:44 +0000 (0:00:03.367) 0:00:07.323 *******
2026-02-20 04:48:58.569936 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 04:48:58.569947 | orchestrator |
2026-02-20 04:48:58.569958 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-20 04:48:58.569969 | orchestrator | Friday 20 February 2026 04:48:46 +0000 (0:00:01.970) 0:00:09.293 *******
2026-02-20 04:48:58.569981 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-20 04:48:58.569993 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-20 04:48:58.570004 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-20 04:48:58.570074 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-20 04:48:58.570086 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-20 04:48:58.570098 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-20 04:48:58.570109 | orchestrator |
2026-02-20 04:48:58.570120 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-20 04:48:58.570131 | orchestrator | Friday 20 February 2026 04:48:48 +0000 (0:00:02.078) 0:00:11.372 *******
2026-02-20 04:48:58.570142 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-20 04:48:58.570154 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-20 04:48:58.570165 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-20 04:48:58.570203 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-20 04:48:58.570213 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-20 04:48:58.570223 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-20 04:48:58.570233 | orchestrator |
2026-02-20 04:48:58.570243 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-20 04:48:58.570253 | orchestrator | Friday 20 February 2026 04:48:51 +0000 (0:00:02.705) 0:00:14.077 *******
2026-02-20 04:48:58.570262 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-20 04:48:58.570272 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:48:58.570283 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-20 04:48:58.570292 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:48:58.570302 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-20 04:48:58.570312 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:48:58.570321 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-20 04:48:58.570345 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:48:58.570362 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-20 04:48:58.570379 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:48:58.570396 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-20 04:48:58.570412 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:48:58.570428 | orchestrator |
2026-02-20 04:48:58.570445 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host]
*****************
2026-02-20 04:48:58.570464 | orchestrator | Friday 20 February 2026 04:48:53 +0000 (0:00:02.324) 0:00:16.402 *******
2026-02-20 04:48:58.570481 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:48:58.570498 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:48:58.570515 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:48:58.570532 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:48:58.570549 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:48:58.570565 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:48:58.570576 | orchestrator |
2026-02-20 04:48:58.570585 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-20 04:48:58.570595 | orchestrator | Friday 20 February 2026 04:48:55 +0000 (0:00:02.122) 0:00:18.524 *******
2026-02-20 04:48:58.570629 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:48:58.570647 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:48:58.570658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:48:58.570678 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:48:58.570695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:48:58.570707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:48:58.570726 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:00.858753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:00.858966 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:00.858984 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859013 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:00.859026 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:00.859038 | orchestrator |
2026-02-20
04:49:00.859055 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-20 04:49:00.859081 | orchestrator | Friday 20 February 2026 04:48:58 +0000 (0:00:02.630) 0:00:21.154 *******
2026-02-20 04:49:00.859133 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859166 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859213 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859253 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:00.859286 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378864 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378880 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378893 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:06.378924 | orchestrator |
2026-02-20 04:49:06.378938 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-20 04:49:06.378950 | orchestrator | Friday 20 February 2026 04:49:01 +0000 (0:00:03.402) 0:00:24.557 *******
2026-02-20 04:49:06.378961 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:49:06.378974 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:49:06.378985 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:49:06.378996 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:49:06.379007 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:49:06.379018 |
orchestrator | skipping: [testbed-node-5]
2026-02-20 04:49:06.379029 | orchestrator |
2026-02-20 04:49:06.379041 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-20 04:49:06.379070 | orchestrator | Friday 20 February 2026 04:49:04 +0000 (0:00:02.280) 0:00:26.837 *******
2026-02-20 04:49:06.379083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:06.379096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:06.379114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:06.379128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:06.379142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:06.379173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-20 04:49:09.896818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:09.896917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:09.896926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:09.896933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:09.896953 |
orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 04:49:09.896972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-20 04:49:09.896979 | orchestrator | 2026-02-20 04:49:09.896985 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-20 04:49:09.896992 | orchestrator | Friday 20 February 2026 04:49:07 +0000 (0:00:03.293) 0:00:30.130 ******* 2026-02-20 04:49:09.896999 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:49:09.897006 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897011 | orchestrator | } 2026-02-20 04:49:09.897016 | orchestrator | 
changed: [testbed-node-1] => { 2026-02-20 04:49:09.897022 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897027 | orchestrator | } 2026-02-20 04:49:09.897032 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:49:09.897037 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897042 | orchestrator | } 2026-02-20 04:49:09.897047 | orchestrator | changed: [testbed-node-3] => { 2026-02-20 04:49:09.897053 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897058 | orchestrator | } 2026-02-20 04:49:09.897063 | orchestrator | changed: [testbed-node-4] => { 2026-02-20 04:49:09.897069 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897074 | orchestrator | } 2026-02-20 04:49:09.897080 | orchestrator | changed: [testbed-node-5] => { 2026-02-20 04:49:09.897085 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:49:09.897090 | orchestrator | } 2026-02-20 04:49:09.897095 | orchestrator | 2026-02-20 04:49:09.897101 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:49:09.897106 | orchestrator | Friday 20 February 2026 04:49:09 +0000 (0:00:01.893) 0:00:32.024 ******* 2026-02-20 04:49:09.897115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 
04:49:09.897126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-20 04:49:09.897131 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:49:09.897137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 04:49:09.897142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-20 04:49:09.897151 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:49:41.768415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 04:49:41.768537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-20 04:49:41.768552 | orchestrator | skipping: [testbed-node-2] 
2026-02-20 04:49:41.768608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 04:49:41.768618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-20 04:49:41.768628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 04:49:41.768653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-20 04:49:41.768663 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:49:41.768673 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:49:41.768682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-20 04:49:41.768695 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-20 04:49:41.768711 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:49:41.768721 | orchestrator |
2026-02-20 04:49:41.768730 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768741 | orchestrator | Friday 20 February 2026 04:49:12 +0000 (0:00:02.636) 0:00:34.661 *******
2026-02-20 04:49:41.768749 | orchestrator |
2026-02-20 04:49:41.768758 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768824 | orchestrator | Friday 20 February 2026 04:49:12 +0000 (0:00:00.487) 0:00:35.148 *******
2026-02-20 04:49:41.768835 | orchestrator |
2026-02-20 04:49:41.768843 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768858 | orchestrator | Friday 20 February 2026 04:49:13 +0000 (0:00:00.502) 0:00:35.651 *******
2026-02-20 04:49:41.768872 | orchestrator |
2026-02-20 04:49:41.768892 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768910 | orchestrator | Friday 20 February 2026 04:49:13 +0000 (0:00:00.508) 0:00:36.159 *******
2026-02-20 04:49:41.768925 | orchestrator |
2026-02-20 04:49:41.768939 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768952 | orchestrator | Friday 20 February 2026 04:49:14 +0000 (0:00:00.680) 0:00:36.840 *******
2026-02-20 04:49:41.768966 | orchestrator |
2026-02-20 04:49:41.768980 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-20 04:49:41.768996 | orchestrator | Friday 20 February 2026 04:49:14 +0000 (0:00:00.521) 0:00:37.361 *******
2026-02-20 04:49:41.769010 | orchestrator |
2026-02-20 04:49:41.769026 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-20 04:49:41.769042 | orchestrator | Friday 20 February 2026 04:49:15 +0000 (0:00:00.865) 0:00:38.226 *******
2026-02-20 04:49:41.769057 | orchestrator | changed: [testbed-node-5]
2026-02-20 04:49:41.769073 | orchestrator | changed: [testbed-node-3]
2026-02-20 04:49:41.769089 | orchestrator | changed: [testbed-node-4]
2026-02-20 04:49:41.769105 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:49:41.769127 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:49:41.769143 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:49:41.769158 | orchestrator |
2026-02-20 04:49:41.769172 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-20 04:49:41.769187 | orchestrator | Friday 20 February 2026 04:49:27 +0000 (0:00:12.088) 0:00:50.315 *******
2026-02-20 04:49:41.769200 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:49:41.769216 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:49:41.769231 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:49:41.769246 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:49:41.769261 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:49:41.769277 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:49:41.769293 | orchestrator |
2026-02-20 04:49:41.769306 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-20 04:49:41.769316 | orchestrator | Friday 20 February 2026 04:49:30 +0000 (0:00:02.323) 0:00:52.639 *******
2026-02-20 04:49:41.769325 | orchestrator | changed: [testbed-node-5]
2026-02-20 04:49:41.769333 | orchestrator | changed: [testbed-node-3]
2026-02-20 04:49:41.769342 | orchestrator | changed: [testbed-node-4]
2026-02-20 04:49:41.769350 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:49:41.769359 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:49:41.769368 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:49:41.769376 | orchestrator |
2026-02-20 04:49:41.769394 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-20 04:49:41.769414 | orchestrator | Friday 20 February 2026 04:49:41 +0000 (0:00:11.714) 0:01:04.354 *******
2026-02-20 04:49:57.812047 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-20 04:49:57.812167 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-20 04:49:57.812183 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-20 04:49:57.812195 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-20 04:49:57.812206 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-20 04:49:57.812218 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-20 04:49:57.812229 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-20 04:49:57.812240 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-20 04:49:57.812251 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-20 04:49:57.812262 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-20 04:49:57.812274 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-20 04:49:57.812302 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-20 04:49:57.812314 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812325 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812336 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812347 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812358 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812369 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-20 04:49:57.812381 | orchestrator |
2026-02-20 04:49:57.812393 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-20 04:49:57.812405 | orchestrator | Friday 20 February 2026 04:49:49 +0000 (0:00:07.809) 0:01:12.163 *******
2026-02-20 04:49:57.812417 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-20 04:49:57.812428 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:49:57.812441 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-20 04:49:57.812452 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:49:57.812468 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-20 04:49:57.812488 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:49:57.812520 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-20 04:49:57.812539 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-20 04:49:57.812558 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-20 04:49:57.812580 | orchestrator |
2026-02-20 04:49:57.812601 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-20 04:49:57.812622 | orchestrator | Friday 20 February 2026 04:49:52 +0000 (0:00:03.426) 0:01:15.590 *******
2026-02-20 04:49:57.812639 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812674 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:49:57.812687 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812699 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:49:57.812712 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812724 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:49:57.812737 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812749 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812788 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-20 04:49:57.812805 | orchestrator |
2026-02-20 04:49:57.812818 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:49:57.812832 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-20 04:49:57.812846 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-20 04:49:57.812860 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-20 04:49:57.812872 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 04:49:57.812904 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 04:49:57.812923 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-20 04:49:57.812941 | orchestrator |
2026-02-20 04:49:57.812959 | orchestrator |
2026-02-20 04:49:57.812978 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:49:57.812996 | orchestrator | Friday 20 February 2026 04:49:57 +0000 (0:00:04.421) 0:01:20.011 *******
2026-02-20 04:49:57.813009 | orchestrator | ===============================================================================
2026-02-20 04:49:57.813019 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.09s
2026-02-20 04:49:57.813030 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.71s
2026-02-20 04:49:57.813041 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.81s
2026-02-20 04:49:57.813052 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.42s
2026-02-20 04:49:57.813069 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.57s
2026-02-20 04:49:57.813089 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.43s
2026-02-20 04:49:57.813106 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.40s
2026-02-20 04:49:57.813118 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.37s
2026-02-20 04:49:57.813129 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.29s
2026-02-20 04:49:57.813148 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.71s
2026-02-20 04:49:57.813159 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.64s
2026-02-20 04:49:57.813170 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.63s
2026-02-20 04:49:57.813181 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.57s
2026-02-20 04:49:57.813192 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.33s
2026-02-20 04:49:57.813202 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.32s
2026-02-20 04:49:57.813213 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.28s
2026-02-20 04:49:57.813233 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.12s
2026-02-20 04:49:57.813244 | orchestrator | module-load : Load modules ---------------------------------------------- 2.08s
2026-02-20 04:49:57.813255 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.97s
2026-02-20 04:49:57.813266 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.89s
2026-02-20 04:49:58.100921 | orchestrator | + osism apply -a upgrade ovn
2026-02-20 04:50:00.275232 | orchestrator | 2026-02-20 04:50:00 | INFO  | Task 37e58297-e63c-4f44-b8f8-d62b57d0b0f3 (ovn) was prepared for execution.
2026-02-20 04:50:00.275307 | orchestrator | 2026-02-20 04:50:00 | INFO  | It takes a moment until task 37e58297-e63c-4f44-b8f8-d62b57d0b0f3 (ovn) has been started and output is visible here.
2026-02-20 04:50:20.817254 | orchestrator |
2026-02-20 04:50:20.817401 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-20 04:50:20.817429 | orchestrator |
2026-02-20 04:50:20.817449 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-20 04:50:20.817470 | orchestrator | Friday 20 February 2026 04:50:05 +0000 (0:00:01.318) 0:00:01.318 *******
2026-02-20 04:50:20.817490 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:50:20.817512 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:50:20.817532 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:50:20.817552 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:50:20.817571 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:50:20.817591 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:50:20.817611 | orchestrator |
2026-02-20 04:50:20.817631 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-20 04:50:20.817652 | orchestrator | Friday 20 February 2026 04:50:08 +0000 (0:00:02.468) 0:00:03.786 *******
2026-02-20 04:50:20.817672 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-20 04:50:20.817693 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-20 04:50:20.817713 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-20 04:50:20.817735 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-20 04:50:20.817759 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-20 04:50:20.817848 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-20 04:50:20.817871 | orchestrator |
2026-02-20 04:50:20.817893 | orchestrator | PLAY [Apply role ovn-controller]
*********************************************** 2026-02-20 04:50:20.817916 | orchestrator | 2026-02-20 04:50:20.817937 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-20 04:50:20.817960 | orchestrator | Friday 20 February 2026 04:50:11 +0000 (0:00:03.728) 0:00:07.515 ******* 2026-02-20 04:50:20.817983 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:50:20.818008 | orchestrator | 2026-02-20 04:50:20.818127 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-20 04:50:20.818149 | orchestrator | Friday 20 February 2026 04:50:13 +0000 (0:00:02.141) 0:00:09.657 ******* 2026-02-20 04:50:20.818174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818257 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818297 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818318 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818362 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818381 | orchestrator | 2026-02-20 04:50:20.818400 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-20 04:50:20.818418 | orchestrator | Friday 20 February 2026 04:50:15 +0000 (0:00:02.097) 0:00:11.755 
******* 2026-02-20 04:50:20.818434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818469 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818519 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818545 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818563 | orchestrator | 2026-02-20 04:50:20.818580 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-20 04:50:20.818598 | orchestrator | Friday 20 February 2026 04:50:18 +0000 (0:00:02.513) 0:00:14.268 ******* 2026-02-20 04:50:20.818615 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818634 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:20.818662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.513660 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.513858 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.513893 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.513935 | orchestrator | 2026-02-20 04:50:28.513949 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-20 04:50:28.513961 | orchestrator | Friday 20 February 2026 04:50:20 +0000 (0:00:02.306) 0:00:16.575 ******* 2026-02-20 04:50:28.513973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.513985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514013 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514087 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514139 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514160 | orchestrator | 2026-02-20 04:50:28.514180 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-20 04:50:28.514201 | orchestrator | Friday 20 February 2026 04:50:23 +0000 (0:00:03.140) 0:00:19.715 ******* 2026-02-20 04:50:28.514223 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:50:28.514349 | orchestrator | 2026-02-20 04:50:28.514363 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-20 04:50:28.514376 | orchestrator | Friday 20 February 2026 04:50:26 +0000 (0:00:02.656) 0:00:22.372 ******* 2026-02-20 04:50:28.514390 | orchestrator | changed: [testbed-node-0] => { 2026-02-20 04:50:28.514404 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514417 | orchestrator | } 2026-02-20 04:50:28.514429 | orchestrator | changed: [testbed-node-1] => { 2026-02-20 04:50:28.514439 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514451 | orchestrator | } 2026-02-20 04:50:28.514461 | orchestrator | changed: [testbed-node-2] => { 2026-02-20 04:50:28.514472 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514483 | orchestrator | } 2026-02-20 04:50:28.514498 | orchestrator | changed: [testbed-node-3] => { 2026-02-20 04:50:28.514517 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514534 | orchestrator | } 2026-02-20 04:50:28.514554 | orchestrator | changed: [testbed-node-4] => { 2026-02-20 04:50:28.514572 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514591 | orchestrator | } 2026-02-20 04:50:28.514610 | orchestrator | changed: [testbed-node-5] => { 2026-02-20 04:50:28.514628 | orchestrator |  "msg": "Notifying handlers" 2026-02-20 04:50:28.514648 | orchestrator | } 
2026-02-20 04:50:28.514660 | orchestrator | 2026-02-20 04:50:28.514671 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-20 04:50:28.514682 | orchestrator | Friday 20 February 2026 04:50:28 +0000 (0:00:01.798) 0:00:24.171 ******* 2026-02-20 04:50:28.514704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091212 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:51:00.091306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091317 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:51:00.091323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091329 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:51:00.091334 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091346 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:51:00.091351 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:51:00.091380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:51:00.091393 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:51:00.091398 | orchestrator | 2026-02-20 04:51:00.091404 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-20 04:51:00.091423 | orchestrator | Friday 20 February 2026 04:50:31 +0000 (0:00:02.766) 0:00:26.937 ******* 2026-02-20 04:51:00.091428 | orchestrator | ok: [testbed-node-0] 2026-02-20 
04:51:00.091434 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:51:00.091439 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:51:00.091444 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:51:00.091449 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:51:00.091454 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:51:00.091459 | orchestrator | 2026-02-20 04:51:00.091464 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-20 04:51:00.091469 | orchestrator | Friday 20 February 2026 04:50:34 +0000 (0:00:03.711) 0:00:30.648 ******* 2026-02-20 04:51:00.091474 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-20 04:51:00.091480 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-20 04:51:00.091485 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-20 04:51:00.091506 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-20 04:51:00.091511 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-20 04:51:00.091515 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-20 04:51:00.091520 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091525 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091530 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091535 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091539 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091554 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-20 04:51:00.091559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091566 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091571 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091576 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091581 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-20 04:51:00.091590 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 04:51:00.091595 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 04:51:00.091600 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 04:51:00.091605 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 04:51:00.091610 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-20 04:51:00.091615 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
2026-02-20 04:51:00.091619 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091624 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091629 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091634 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091638 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091643 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-20 04:51:00.091648 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091653 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091661 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091670 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091675 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091680 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-20 04:51:00.091685 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 04:51:00.091690 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 04:51:00.091695 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 04:51:00.091700 | orchestrator | 
ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-20 04:51:00.091705 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 04:51:00.091710 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-20 04:51:00.091716 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-20 04:51:00.091730 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-20 04:51:00.091739 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-20 04:51:00.091747 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-20 04:51:00.091754 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-20 04:51:00.091824 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 04:53:48.466239 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-20 04:53:48.466377 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 04:53:48.466395 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-20 04:53:48.466406 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 04:53:48.466418 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 04:53:48.466428 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-20 04:53:48.466439 | orchestrator | 2026-02-20 04:53:48.466450 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 04:53:48.466460 | orchestrator | Friday 20 February 2026 04:50:57 +0000 (0:00:22.181) 0:00:52.830 ******* 2026-02-20 04:53:48.466470 | orchestrator | 2026-02-20 04:53:48.466480 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 04:53:48.466490 | orchestrator | Friday 20 February 2026 04:50:57 +0000 (0:00:00.437) 0:00:53.267 ******* 2026-02-20 04:53:48.466500 | orchestrator | 2026-02-20 04:53:48.466509 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 04:53:48.466519 | orchestrator | Friday 20 February 2026 04:50:57 +0000 (0:00:00.415) 0:00:53.683 ******* 2026-02-20 04:53:48.466529 | orchestrator | 2026-02-20 04:53:48.466539 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 04:53:48.466549 | orchestrator | Friday 20 February 2026 04:50:58 +0000 (0:00:00.428) 0:00:54.112 ******* 2026-02-20 04:53:48.466588 | orchestrator | 2026-02-20 04:53:48.466605 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-20 04:53:48.466621 | orchestrator | Friday 20 February 2026 04:50:58 +0000 (0:00:00.416) 0:00:54.528 ******* 2026-02-20 04:53:48.466637 | orchestrator | 2026-02-20 04:53:48.466654 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-20 04:53:48.466671 | orchestrator | Friday 20 February 2026 04:50:59 +0000 (0:00:00.475) 0:00:55.004 ******* 2026-02-20 04:53:48.466689 | orchestrator | 2026-02-20 04:53:48.466705 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-20 04:53:48.466726 | orchestrator | Friday 20 February 2026 04:51:00 +0000 (0:00:00.797) 0:00:55.801 ******* 2026-02-20 04:53:48.466750 | orchestrator | 2026-02-20 04:53:48.466793 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-20 04:53:48.466811 | orchestrator | changed: [testbed-node-4] 2026-02-20 04:53:48.466829 | orchestrator | changed: [testbed-node-5] 2026-02-20 04:53:48.466848 | orchestrator | changed: [testbed-node-3] 2026-02-20 04:53:48.466865 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:53:48.466882 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:53:48.466897 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:53:48.466908 | orchestrator | 2026-02-20 04:53:48.466920 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-20 04:53:48.466932 | orchestrator | 2026-02-20 04:53:48.466968 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 04:53:48.466991 | orchestrator | Friday 20 February 2026 04:53:12 +0000 (0:02:12.318) 0:03:08.119 ******* 2026-02-20 04:53:48.467008 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:53:48.467025 | orchestrator | 2026-02-20 04:53:48.467041 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 04:53:48.467059 | orchestrator | Friday 20 February 2026 04:53:14 +0000 (0:00:01.831) 0:03:09.951 ******* 2026-02-20 04:53:48.467078 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-20 04:53:48.467095 | orchestrator | 2026-02-20 04:53:48.467110 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-20 04:53:48.467122 | orchestrator | Friday 20 February 2026 04:53:16 +0000 (0:00:01.951) 0:03:11.903 ******* 2026-02-20 04:53:48.467133 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467146 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467157 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467168 | orchestrator | 2026-02-20 04:53:48.467179 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-20 04:53:48.467192 | orchestrator | Friday 20 February 2026 04:53:17 +0000 (0:00:01.866) 0:03:13.770 ******* 2026-02-20 04:53:48.467203 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467214 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467225 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467236 | orchestrator | 2026-02-20 04:53:48.467248 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-20 04:53:48.467259 | orchestrator | Friday 20 February 2026 04:53:19 +0000 (0:00:01.321) 0:03:15.091 ******* 2026-02-20 04:53:48.467271 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467282 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467293 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467305 | orchestrator | 2026-02-20 04:53:48.467316 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-20 04:53:48.467333 | orchestrator | Friday 20 February 2026 04:53:20 +0000 (0:00:01.360) 0:03:16.452 ******* 2026-02-20 04:53:48.467350 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467368 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467384 | orchestrator 
| ok: [testbed-node-2] 2026-02-20 04:53:48.467401 | orchestrator | 2026-02-20 04:53:48.467418 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-20 04:53:48.467451 | orchestrator | Friday 20 February 2026 04:53:22 +0000 (0:00:01.562) 0:03:18.014 ******* 2026-02-20 04:53:48.467524 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467543 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467559 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467571 | orchestrator | 2026-02-20 04:53:48.467582 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-20 04:53:48.467598 | orchestrator | Friday 20 February 2026 04:53:23 +0000 (0:00:01.377) 0:03:19.392 ******* 2026-02-20 04:53:48.467611 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:53:48.467623 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:53:48.467634 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:53:48.467645 | orchestrator | 2026-02-20 04:53:48.467657 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-20 04:53:48.467668 | orchestrator | Friday 20 February 2026 04:53:24 +0000 (0:00:01.311) 0:03:20.704 ******* 2026-02-20 04:53:48.467679 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467691 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467702 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467715 | orchestrator | 2026-02-20 04:53:48.467732 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-20 04:53:48.467749 | orchestrator | Friday 20 February 2026 04:53:26 +0000 (0:00:01.847) 0:03:22.551 ******* 2026-02-20 04:53:48.467801 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467819 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467836 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467853 | 
orchestrator | 2026-02-20 04:53:48.467871 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-20 04:53:48.467889 | orchestrator | Friday 20 February 2026 04:53:28 +0000 (0:00:01.580) 0:03:24.132 ******* 2026-02-20 04:53:48.467906 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467920 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467931 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.467942 | orchestrator | 2026-02-20 04:53:48.467954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-20 04:53:48.467965 | orchestrator | Friday 20 February 2026 04:53:30 +0000 (0:00:01.874) 0:03:26.006 ******* 2026-02-20 04:53:48.467976 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.467987 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.467998 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.468009 | orchestrator | 2026-02-20 04:53:48.468021 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-20 04:53:48.468032 | orchestrator | Friday 20 February 2026 04:53:31 +0000 (0:00:01.439) 0:03:27.445 ******* 2026-02-20 04:53:48.468043 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:53:48.468055 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:53:48.468066 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:53:48.468076 | orchestrator | 2026-02-20 04:53:48.468087 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-20 04:53:48.468099 | orchestrator | Friday 20 February 2026 04:53:33 +0000 (0:00:01.366) 0:03:28.812 ******* 2026-02-20 04:53:48.468110 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:53:48.468121 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:53:48.468135 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:53:48.468152 | orchestrator | 2026-02-20 
04:53:48.468168 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-20 04:53:48.468184 | orchestrator | Friday 20 February 2026 04:53:34 +0000 (0:00:01.393) 0:03:30.206 ******* 2026-02-20 04:53:48.468200 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.468217 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.468234 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.468251 | orchestrator | 2026-02-20 04:53:48.468268 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-20 04:53:48.468295 | orchestrator | Friday 20 February 2026 04:53:36 +0000 (0:00:01.839) 0:03:32.045 ******* 2026-02-20 04:53:48.468326 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.468343 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.468359 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.468371 | orchestrator | 2026-02-20 04:53:48.468383 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-20 04:53:48.468394 | orchestrator | Friday 20 February 2026 04:53:37 +0000 (0:00:01.353) 0:03:33.399 ******* 2026-02-20 04:53:48.468409 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.468425 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.468450 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.468470 | orchestrator | 2026-02-20 04:53:48.468487 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-20 04:53:48.468504 | orchestrator | Friday 20 February 2026 04:53:39 +0000 (0:00:02.076) 0:03:35.476 ******* 2026-02-20 04:53:48.468520 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:53:48.468536 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:53:48.468552 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:53:48.468568 | orchestrator | 2026-02-20 04:53:48.468585 | orchestrator | TASK [ovn-db : Fail on existing OVN SB 
cluster with no leader] ***************** 2026-02-20 04:53:48.468601 | orchestrator | Friday 20 February 2026 04:53:41 +0000 (0:00:01.484) 0:03:36.960 ******* 2026-02-20 04:53:48.468617 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:53:48.468634 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:53:48.468649 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:53:48.468666 | orchestrator | 2026-02-20 04:53:48.468683 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-20 04:53:48.468700 | orchestrator | Friday 20 February 2026 04:53:42 +0000 (0:00:01.332) 0:03:38.293 ******* 2026-02-20 04:53:48.468717 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:53:48.468735 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:53:48.468752 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:53:48.468852 | orchestrator | 2026-02-20 04:53:48.468863 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-20 04:53:48.468875 | orchestrator | Friday 20 February 2026 04:53:44 +0000 (0:00:01.679) 0:03:39.973 ******* 2026-02-20 04:53:48.468908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739695 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739711 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739841 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:53:54.739890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739919 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:53:54.739940 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.739960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:53:54.739972 | orchestrator | 
2026-02-20 04:53:54.739983 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-20 04:53:54.739995 | orchestrator | Friday 20 February 2026 04:53:48 +0000 (0:00:04.245) 0:03:44.219 ******* 2026-02-20 04:53:54.740011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.740022 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.740033 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.740043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:53:54.740060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466282 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:54:09.466448 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:54:09.466467 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:54:09.466486 | orchestrator | 2026-02-20 04:54:09.466497 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-20 04:54:09.466509 | orchestrator | Friday 20 February 2026 04:53:54 +0000 (0:00:06.277) 0:03:50.497 ******* 2026-02-20 04:54:09.466519 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-20 04:54:09.466528 | orchestrator | 2026-02-20 04:54:09.466537 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-20 04:54:09.466552 | orchestrator | Friday 20 February 2026 04:53:56 +0000 (0:00:01.714) 0:03:52.211 ******* 2026-02-20 04:54:09.466561 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:54:09.466572 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:54:09.466594 | orchestrator | changed: [testbed-node-2] 2026-02-20 
04:54:09.466603 | orchestrator | 2026-02-20 04:54:09.466612 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-20 04:54:09.466621 | orchestrator | Friday 20 February 2026 04:53:58 +0000 (0:00:01.934) 0:03:54.146 ******* 2026-02-20 04:54:09.466630 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:54:09.466638 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:54:09.466647 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:54:09.466656 | orchestrator | 2026-02-20 04:54:09.466665 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-20 04:54:09.466674 | orchestrator | Friday 20 February 2026 04:54:01 +0000 (0:00:02.728) 0:03:56.874 ******* 2026-02-20 04:54:09.466682 | orchestrator | changed: [testbed-node-0] 2026-02-20 04:54:09.466691 | orchestrator | changed: [testbed-node-1] 2026-02-20 04:54:09.466700 | orchestrator | changed: [testbed-node-2] 2026-02-20 04:54:09.466708 | orchestrator | 2026-02-20 04:54:09.466717 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-20 04:54:09.466726 | orchestrator | Friday 20 February 2026 04:54:03 +0000 (0:00:02.844) 0:03:59.719 ******* 2026-02-20 04:54:09.466739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:09.466896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:13.940687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:54:13.940813 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:13.940834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-20 04:54:13.940839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-20 04:54:13.940843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940860 | orchestrator |
2026-02-20 04:54:13.940866 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-20 04:54:13.940871 | orchestrator | Friday 20 February 2026 04:54:09 +0000 (0:00:05.495) 0:04:05.215 *******
2026-02-20 04:54:13.940877 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:54:13.940882 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:54:13.940886 | orchestrator | }
2026-02-20 04:54:13.940890 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:54:13.940894 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:54:13.940898 | orchestrator | }
2026-02-20 04:54:13.940902 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:54:13.940906 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:54:13.940909 | orchestrator | }
2026-02-20 04:54:13.940913 | orchestrator |
2026-02-20 04:54:13.940918 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-20 04:54:13.940922 | orchestrator | Friday 20 February 2026 04:54:10 +0000 (0:00:01.437) 0:04:06.653 *******
2026-02-20 04:54:13.940926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:54:13.940986 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-20 04:55:43.962420 | orchestrator |
2026-02-20 04:55:43.962575 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-20 04:55:43.962597 | orchestrator | Friday 20 February 2026 04:54:13 +0000 (0:00:03.043) 0:04:09.696 *******
2026-02-20 04:55:43.962610 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-20 04:55:43.962622 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-20 04:55:43.962633 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-20 04:55:43.962644 | orchestrator |
2026-02-20 04:55:43.962656 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-20 04:55:43.962667 | orchestrator | Friday 20 February 2026 04:54:16 +0000 (0:00:02.268) 0:04:11.964 *******
2026-02-20 04:55:43.962679 | orchestrator | changed: [testbed-node-0] => {
2026-02-20 04:55:43.962691 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:55:43.962703 | orchestrator | }
2026-02-20 04:55:43.962714 | orchestrator | changed: [testbed-node-1] => {
2026-02-20 04:55:43.962725 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:55:43.962736 | orchestrator | }
2026-02-20 04:55:43.962798 | orchestrator | changed: [testbed-node-2] => {
2026-02-20 04:55:43.962820 | orchestrator |  "msg": "Notifying handlers"
2026-02-20 04:55:43.962840 | orchestrator | }
2026-02-20 04:55:43.962859 | orchestrator |
2026-02-20 04:55:43.962906 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-20 04:55:43.962919 | orchestrator | Friday 20 February 2026 04:54:17 +0000 (0:00:01.350) 0:04:13.315 *******
2026-02-20 04:55:43.962930 | orchestrator |
2026-02-20 04:55:43.962943 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-20 04:55:43.962956 | orchestrator | Friday 20 February 2026 04:54:18 +0000 (0:00:00.456) 0:04:13.772 *******
2026-02-20 04:55:43.962970 | orchestrator |
2026-02-20 04:55:43.962983 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-20 04:55:43.962996 | orchestrator | Friday 20 February 2026 04:54:18 +0000 (0:00:00.444) 0:04:14.217 *******
2026-02-20 04:55:43.963009 | orchestrator |
2026-02-20 04:55:43.963021 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-20 04:55:43.963034 | orchestrator | Friday 20 February 2026 04:54:19 +0000 (0:00:00.774) 0:04:14.991 *******
2026-02-20 04:55:43.963047 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:55:43.963060 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:55:43.963072 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:55:43.963085 | orchestrator |
2026-02-20 04:55:43.963098 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-20 04:55:43.963111 | orchestrator | Friday 20 February 2026 04:54:35 +0000 (0:00:16.277) 0:04:31.269 *******
2026-02-20 04:55:43.963124 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:55:43.963137 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:55:43.963149 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:55:43.963162 | orchestrator |
2026-02-20 04:55:43.963175 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-20 04:55:43.963187 | orchestrator | Friday 20 February 2026 04:54:51 +0000 (0:00:15.805) 0:04:47.075 *******
2026-02-20 04:55:43.963198 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-20 04:55:43.963209 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-20 04:55:43.963220 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-20 04:55:43.963231 | orchestrator |
2026-02-20 04:55:43.963242 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-20 04:55:43.963253 | orchestrator | Friday 20 February 2026 04:55:07 +0000 (0:00:15.743) 0:05:02.818 *******
2026-02-20 04:55:43.963264 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:55:43.963276 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:55:43.963286 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:55:43.963297 | orchestrator |
2026-02-20 04:55:43.963309 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-20 04:55:43.963328 | orchestrator | Friday 20 February 2026 04:55:23 +0000 (0:00:16.677) 0:05:19.496 *******
2026-02-20 04:55:43.963345 | orchestrator | Pausing for 5 seconds
2026-02-20 04:55:43.963363 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:55:43.963380 | orchestrator |
2026-02-20 04:55:43.963400 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-20 04:55:43.963419 | orchestrator | Friday 20 February 2026 04:55:29 +0000 (0:00:06.172) 0:05:25.668 *******
2026-02-20 04:55:43.963437 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:55:43.963455 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:55:43.963467 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:55:43.963477 | orchestrator |
2026-02-20 04:55:43.963488 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-20 04:55:43.963500 | orchestrator | Friday 20 February 2026 04:55:31 +0000 (0:00:01.980) 0:05:27.648 *******
2026-02-20 04:55:43.963511 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:55:43.963522 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:55:43.963533 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:55:43.963544 | orchestrator |
2026-02-20 04:55:43.963555 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-20 04:55:43.963566 | orchestrator | Friday 20 February 2026 04:55:33 +0000 (0:00:01.719) 0:05:29.368 *******
2026-02-20 04:55:43.963577 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:55:43.963598 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:55:43.963609 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:55:43.963620 | orchestrator |
2026-02-20 04:55:43.963631 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-20 04:55:43.963642 | orchestrator | Friday 20 February 2026 04:55:35 +0000 (0:00:01.867) 0:05:31.236 *******
2026-02-20 04:55:43.963653 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:55:43.963664 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:55:43.963675 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:55:43.963686 | orchestrator |
2026-02-20 04:55:43.963697 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-20 04:55:43.963708 | orchestrator | Friday 20 February 2026 04:55:37 +0000 (0:00:01.723) 0:05:32.960 *******
2026-02-20 04:55:43.963719 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:55:43.963730 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:55:43.963798 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:55:43.963813 | orchestrator |
2026-02-20 04:55:43.963824 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-20 04:55:43.963856 | orchestrator | Friday 20 February 2026 04:55:38 +0000 (0:00:01.797) 0:05:34.757 *******
2026-02-20 04:55:43.963868 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:55:43.963879 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:55:43.963890 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:55:43.963901 | orchestrator |
2026-02-20 04:55:43.963911 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-20 04:55:43.963923 | orchestrator | Friday 20 February 2026 04:55:40 +0000 (0:00:01.790) 0:05:36.547 *******
2026-02-20 04:55:43.963934 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-20 04:55:43.963945 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-20 04:55:43.963956 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-20 04:55:43.963967 | orchestrator |
2026-02-20 04:55:43.963977 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 04:55:43.963990 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 04:55:43.964010 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-20 04:55:43.964022 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-20 04:55:43.964033 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:55:43.964044 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:55:43.964055 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 04:55:43.964066 | orchestrator |
2026-02-20 04:55:43.964077 | orchestrator |
2026-02-20 04:55:43.964088 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 04:55:43.964099 | orchestrator | Friday 20 February 2026 04:55:43 +0000 (0:00:02.825) 0:05:39.373 *******
2026-02-20 04:55:43.964110 | orchestrator | ===============================================================================
2026-02-20 04:55:43.964121 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.32s
2026-02-20 04:55:43.964132 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.18s
2026-02-20 04:55:43.964143 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.68s
2026-02-20 04:55:43.964154 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.28s
2026-02-20 04:55:43.964165 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.81s
2026-02-20 04:55:43.964184 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.74s
2026-02-20 04:55:43.964195 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.28s
2026-02-20 04:55:43.964205 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.17s
2026-02-20 04:55:43.964216 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.50s
2026-02-20 04:55:43.964227 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.25s
2026-02-20 04:55:43.964238 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.73s
2026-02-20 04:55:43.964249 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.71s
2026-02-20 04:55:43.964260 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.14s
2026-02-20 04:55:43.964271 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.04s
2026-02-20 04:55:43.964282 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 2.97s
2026-02-20 04:55:43.964293 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.85s
2026-02-20 04:55:43.964304 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.83s
2026-02-20 04:55:43.964315 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.77s
2026-02-20 04:55:43.964326 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.73s
2026-02-20 04:55:43.964337 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.66s
2026-02-20 04:55:44.229479 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-20 04:55:44.229552 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-20 04:55:44.229561 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-20 04:55:44.237945 | orchestrator | + set -e
2026-02-20 04:55:44.238097 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-20 04:55:44.238119 | orchestrator | ++ export INTERACTIVE=false
2026-02-20 04:55:44.238134 | orchestrator | ++ INTERACTIVE=false
2026-02-20 04:55:44.238147 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-20 04:55:44.238157 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-20 04:55:44.238165 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-20 04:55:46.185534 | orchestrator | 2026-02-20 04:55:46 | INFO  | Task b9f81349-fb6e-4663-b1db-be891032d2f4 (ceph-rolling_update) was prepared for execution.
2026-02-20 04:55:46.185638 | orchestrator | 2026-02-20 04:55:46 | INFO  | It takes a moment until task b9f81349-fb6e-4663-b1db-be891032d2f4 (ceph-rolling_update) has been started and output is visible here.
2026-02-20 04:57:10.507990 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-20 04:57:10.508096 | orchestrator | 2.16.14
2026-02-20 04:57:10.508111 | orchestrator |
2026-02-20 04:57:10.508121 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-20 04:57:10.508132 | orchestrator |
2026-02-20 04:57:10.508141 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-20 04:57:10.508150 | orchestrator | Friday 20 February 2026 04:55:54 +0000 (0:00:01.502) 0:00:01.502 *******
2026-02-20 04:57:10.508159 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-20 04:57:10.508169 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-20 04:57:10.508178 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-20 04:57:10.508187 | orchestrator | skipping: [localhost]
2026-02-20 04:57:10.508196 | orchestrator |
2026-02-20 04:57:10.508205 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-20 04:57:10.508214 | orchestrator |
2026-02-20 04:57:10.508223 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-20 04:57:10.508232 | orchestrator | Friday 20 February 2026 04:55:55 +0000 (0:00:01.606) 0:00:03.109 *******
2026-02-20 04:57:10.508241 | orchestrator | ok: [testbed-node-0] => {
2026-02-20 04:57:10.508271 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508281 | orchestrator | }
2026-02-20 04:57:10.508290 | orchestrator | ok: [testbed-node-1] => {
2026-02-20 04:57:10.508299 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508308 | orchestrator | }
2026-02-20 04:57:10.508316 | orchestrator | ok: [testbed-node-2] => {
2026-02-20 04:57:10.508325 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508334 | orchestrator | }
2026-02-20 04:57:10.508342 | orchestrator | ok: [testbed-node-3] => {
2026-02-20 04:57:10.508351 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508360 | orchestrator | }
2026-02-20 04:57:10.508368 | orchestrator | ok: [testbed-node-4] => {
2026-02-20 04:57:10.508377 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508385 | orchestrator | }
2026-02-20 04:57:10.508394 | orchestrator | ok: [testbed-node-5] => {
2026-02-20 04:57:10.508403 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508412 | orchestrator | }
2026-02-20 04:57:10.508420 | orchestrator | ok: [testbed-manager] => {
2026-02-20 04:57:10.508429 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-20 04:57:10.508438 | orchestrator | }
2026-02-20 04:57:10.508446 | orchestrator |
2026-02-20 04:57:10.508455 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-20 04:57:10.508464 | orchestrator | Friday 20 February 2026 04:56:00 +0000 (0:00:04.991) 0:00:08.101 *******
2026-02-20 04:57:10.508473 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:57:10.508481 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:57:10.508490 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:57:10.508498 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:57:10.508507 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:57:10.508523 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:57:10.508541 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.508563 | orchestrator |
2026-02-20 04:57:10.508578 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-20 04:57:10.508594 | orchestrator | Friday 20 February 2026 04:56:05 +0000 (0:00:05.193) 0:00:13.294 *******
2026-02-20 04:57:10.508609 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 04:57:10.508623 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 04:57:10.508640 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 04:57:10.508655 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 04:57:10.508666 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 04:57:10.508676 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 04:57:10.508686 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:57:10.508697 | orchestrator |
2026-02-20 04:57:10.508706 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-02-20 04:57:10.508717 | orchestrator | Friday 20 February 2026 04:56:39 +0000 (0:00:33.813) 0:00:47.107 *******
2026-02-20 04:57:10.508727 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.508784 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.508795 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.508815 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.508826 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.508836 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.508846 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.508855 | orchestrator |
2026-02-20 04:57:10.508865 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 04:57:10.508876 | orchestrator | Friday 20 February 2026 04:56:41 +0000 (0:00:02.129) 0:00:49.236 *******
2026-02-20 04:57:10.508896 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-20 04:57:10.508907 | orchestrator |
2026-02-20 04:57:10.508915 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 04:57:10.508924 | orchestrator | Friday 20 February 2026 04:56:44 +0000 (0:00:02.589) 0:00:51.826 *******
2026-02-20 04:57:10.508933 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.508941 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.508950 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.508958 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.508967 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.508975 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.508984 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.508993 | orchestrator |
2026-02-20 04:57:10.509092 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 04:57:10.509110 | orchestrator | Friday 20 February 2026 04:56:46 +0000 (0:00:02.587) 0:00:54.413 *******
2026-02-20 04:57:10.509119 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509128 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509137 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509146 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509154 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509163 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509172 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509181 | orchestrator |
2026-02-20 04:57:10.509189 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 04:57:10.509198 | orchestrator | Friday 20 February 2026 04:56:48 +0000 (0:00:01.908) 0:00:56.322 *******
2026-02-20 04:57:10.509207 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509215 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509224 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509233 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509241 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509250 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509259 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509267 | orchestrator |
2026-02-20 04:57:10.509281 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 04:57:10.509290 | orchestrator | Friday 20 February 2026 04:56:51 +0000 (0:00:02.556) 0:00:58.879 *******
2026-02-20 04:57:10.509299 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509307 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509316 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509325 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509333 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509342 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509351 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509360 | orchestrator |
2026-02-20 04:57:10.509368 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 04:57:10.509377 | orchestrator | Friday 20 February 2026 04:56:53 +0000 (0:00:01.891) 0:01:00.771 *******
2026-02-20 04:57:10.509386 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509395 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509403 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509412 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509420 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509429 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509438 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509446 | orchestrator |
2026-02-20 04:57:10.509455 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 04:57:10.509464 | orchestrator | Friday 20 February 2026 04:56:55 +0000 (0:00:02.082) 0:01:02.853 *******
2026-02-20 04:57:10.509473 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509481 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509490 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509506 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509514 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509523 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509532 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509540 | orchestrator |
2026-02-20 04:57:10.509549 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 04:57:10.509558 | orchestrator | Friday 20 February 2026 04:56:57 +0000 (0:00:01.867) 0:01:04.721 *******
2026-02-20 04:57:10.509567 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:57:10.509576 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:57:10.509585 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:57:10.509593 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:57:10.509602 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:57:10.509611 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:57:10.509620 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:57:10.509628 | orchestrator |
2026-02-20 04:57:10.509637 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 04:57:10.509646 | orchestrator | Friday 20 February 2026 04:56:59 +0000 (0:00:02.195) 0:01:06.916 *******
2026-02-20 04:57:10.509655 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509663 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509672 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509681 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509690 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509698 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509707 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509716 | orchestrator |
2026-02-20 04:57:10.509725 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-20 04:57:10.509791 | orchestrator | Friday 20 February 2026 04:57:01 +0000 (0:00:02.307) 0:01:09.224 *******
2026-02-20 04:57:10.509801 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:57:10.509810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 04:57:10.509819 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 04:57:10.509828 | orchestrator |
2026-02-20 04:57:10.509836 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-20 04:57:10.509845 | orchestrator | Friday 20 February 2026 04:57:03 +0000 (0:00:01.700) 0:01:10.925 *******
2026-02-20 04:57:10.509853 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:10.509862 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:10.509871 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:10.509880 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:10.509889 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:10.509897 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:10.509906 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:10.509915 | orchestrator |
2026-02-20 04:57:10.509923 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-20 04:57:10.509932 | orchestrator | Friday 20 February 2026 04:57:05 +0000 (0:00:02.161) 0:01:13.087 *******
2026-02-20 04:57:10.509941 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:57:10.509949 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 04:57:10.509958 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 04:57:10.509967 | orchestrator |
2026-02-20 04:57:10.509975 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-20 04:57:10.509984 | orchestrator | Friday 20 February 2026 04:57:09 +0000 (0:00:03.481) 0:01:16.568 *******
2026-02-20 04:57:10.510001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:57:32.329228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 04:57:32.329338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 04:57:32.329352 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:57:32.329363 | orchestrator |
2026-02-20 04:57:32.329397 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-20 04:57:32.329409 | orchestrator | Friday 20 February 2026 04:57:10 +0000 (0:00:01.412) 0:01:17.981 *******
2026-02-20 04:57:32.329420 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329446 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329467 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:57:32.329476 | orchestrator |
2026-02-20 04:57:32.329486 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-20 04:57:32.329496 | orchestrator | Friday 20 February 2026 04:57:12 +0000 (0:00:01.841) 0:01:19.823 *******
2026-02-20 04:57:32.329508 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329532 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329542 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:57:32.329552 | orchestrator |
2026-02-20 04:57:32.329562 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-20 04:57:32.329572 | orchestrator | Friday 20 February 2026 04:57:13 +0000 (0:00:01.145) 0:01:20.968 *******
2026-02-20 04:57:32.329583 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c9a9a7d69b4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 04:57:06.286715', 'end': '2026-02-20 04:57:06.349274', 'delta': '0:00:00.062559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9a9a7d69b4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329613 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b179183cbe33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 04:57:07.225949', 'end': '2026-02-20 04:57:07.270599', 'delta': '0:00:00.044650', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b179183cbe33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329637 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '28a82f95a8fd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 04:57:07.823226', 'end': '2026-02-20 04:57:07.865605', 'delta': '0:00:00.042379', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28a82f95a8fd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 04:57:32.329648 | orchestrator |
2026-02-20 04:57:32.329658 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-20 04:57:32.329668 | orchestrator | Friday 20 February 2026 04:57:14 +0000 (0:00:01.163) 0:01:22.132 *******
2026-02-20 04:57:32.329678 | orchestrator | ok: [testbed-node-0]
2026-02-20 04:57:32.329689 | orchestrator | ok: [testbed-node-1]
2026-02-20 04:57:32.329698 | orchestrator | ok: [testbed-node-2]
2026-02-20 04:57:32.329708 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:57:32.329718 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:57:32.329752 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:57:32.329762 | orchestrator | ok: [testbed-manager]
2026-02-20 04:57:32.329772 | orchestrator |
2026-02-20 04:57:32.329783 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-20 04:57:32.329795 | orchestrator | Friday 20 February 2026 04:57:16 +0000 (0:00:02.061) 0:01:24.194 *******
2026-02-20 04:57:32.329807 | orchestrator | skipping: [testbed-node-0]
2026-02-20
04:57:32.329818 | orchestrator | 2026-02-20 04:57:32.329829 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 04:57:32.329840 | orchestrator | Friday 20 February 2026 04:57:17 +0000 (0:00:01.220) 0:01:25.415 ******* 2026-02-20 04:57:32.329852 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:57:32.329864 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:57:32.329875 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:57:32.329886 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:57:32.329897 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:57:32.329909 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:57:32.329920 | orchestrator | ok: [testbed-manager] 2026-02-20 04:57:32.329932 | orchestrator | 2026-02-20 04:57:32.329943 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 04:57:32.329954 | orchestrator | Friday 20 February 2026 04:57:19 +0000 (0:00:02.044) 0:01:27.459 ******* 2026-02-20 04:57:32.329965 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:57:32.329977 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.329988 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.330000 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.330011 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.330077 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.330088 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-20 04:57:32.330100 | orchestrator | 2026-02-20 04:57:32.330111 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 04:57:32.330122 | orchestrator | Friday 20 February 2026 04:57:23 +0000 (0:00:03.392) 0:01:30.852 ******* 2026-02-20 04:57:32.330140 | orchestrator 
| ok: [testbed-node-0] 2026-02-20 04:57:32.330152 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:57:32.330162 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:57:32.330171 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:57:32.330181 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:57:32.330191 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:57:32.330201 | orchestrator | ok: [testbed-manager] 2026-02-20 04:57:32.330210 | orchestrator | 2026-02-20 04:57:32.330220 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 04:57:32.330230 | orchestrator | Friday 20 February 2026 04:57:25 +0000 (0:00:02.149) 0:01:33.002 ******* 2026-02-20 04:57:32.330239 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:32.330249 | orchestrator | 2026-02-20 04:57:32.330259 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 04:57:32.330269 | orchestrator | Friday 20 February 2026 04:57:26 +0000 (0:00:01.113) 0:01:34.116 ******* 2026-02-20 04:57:32.330279 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:32.330288 | orchestrator | 2026-02-20 04:57:32.330298 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 04:57:32.330308 | orchestrator | Friday 20 February 2026 04:57:27 +0000 (0:00:01.233) 0:01:35.349 ******* 2026-02-20 04:57:32.330318 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:32.330327 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:32.330337 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:32.330347 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:32.330361 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:32.330378 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:32.330395 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:32.330414 | orchestrator | 2026-02-20 04:57:32.330430 | orchestrator | 
TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 04:57:32.330446 | orchestrator | Friday 20 February 2026 04:57:30 +0000 (0:00:02.424) 0:01:37.774 ******* 2026-02-20 04:57:32.330462 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:32.330479 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:32.330494 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:32.330510 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:32.330527 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:32.330543 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:32.330572 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.736975 | orchestrator | 2026-02-20 04:57:42.737070 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 04:57:42.737083 | orchestrator | Friday 20 February 2026 04:57:32 +0000 (0:00:02.027) 0:01:39.801 ******* 2026-02-20 04:57:42.737092 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.737101 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:42.737108 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:42.737116 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:42.737124 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:42.737131 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:42.737138 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.737146 | orchestrator | 2026-02-20 04:57:42.737154 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 04:57:42.737161 | orchestrator | Friday 20 February 2026 04:57:34 +0000 (0:00:02.011) 0:01:41.812 ******* 2026-02-20 04:57:42.737168 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.737176 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:42.737183 | orchestrator | skipping: [testbed-node-2] 2026-02-20 
04:57:42.737204 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:42.737212 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:42.737219 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:42.737227 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.737234 | orchestrator | 2026-02-20 04:57:42.737242 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 04:57:42.737264 | orchestrator | Friday 20 February 2026 04:57:36 +0000 (0:00:01.875) 0:01:43.688 ******* 2026-02-20 04:57:42.737272 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.737279 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:42.737286 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:42.737293 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:42.737301 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:42.737308 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:42.737316 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.737323 | orchestrator | 2026-02-20 04:57:42.737331 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 04:57:42.737338 | orchestrator | Friday 20 February 2026 04:57:38 +0000 (0:00:02.173) 0:01:45.861 ******* 2026-02-20 04:57:42.737345 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.737353 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:42.737360 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:42.737367 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:42.737374 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:42.737382 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:42.737389 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.737396 | orchestrator | 2026-02-20 04:57:42.737404 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved 
symlinks] *** 2026-02-20 04:57:42.737411 | orchestrator | Friday 20 February 2026 04:57:40 +0000 (0:00:01.989) 0:01:47.851 ******* 2026-02-20 04:57:42.737419 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.737426 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:42.737433 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:42.737441 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:42.737448 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:42.737455 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:42.737462 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:42.737469 | orchestrator | 2026-02-20 04:57:42.737477 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 04:57:42.737484 | orchestrator | Friday 20 February 2026 04:57:42 +0000 (0:00:02.105) 0:01:49.957 ******* 2026-02-20 04:57:42.737493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:42.737552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 
'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 04:57:42.737614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.737635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 
1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:42.869898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 
'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 04:57:42.869945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869954 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:42.869959 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:42.869974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:42.869982 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032168 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:43.032186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 
'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.032242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}})  2026-02-20 04:57:43.032324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.032337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}})  2026-02-20 04:57:43.032350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.032383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:43.032404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197211 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:57:43.197221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}})  2026-02-20 04:57:43.197237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}})  2026-02-20 04:57:43.197262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.197296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}})  2026-02-20 04:57:43.197323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:43.197336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.343720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}})  2026-02-20 04:57:43.343823 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:57:43.343833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:43.343871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-20 04:57:43.343909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}})  2026-02-20 04:57:43.343916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}})  2026-02-20 04:57:43.343926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}})  2026-02-20 04:57:43.343936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.343941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.343953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}})  2026-02-20 04:57:43.442575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 
'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 04:57:43.442713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'holders': []}})  2026-02-20 04:57:43.442877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:43.442946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}})  2026-02-20 04:57:43.442960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': 
'512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}})  2026-02-20 04:57:43.442984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 
'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 04:57:44.708719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 
'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708801 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:57:44.708811 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:57:44.708833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708856 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708865 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708880 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 04:57:44.708889 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708897 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708905 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.708925 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd7eff79e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 
'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 04:57:44.890229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.890300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 04:57:44.890307 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:57:44.890313 | orchestrator | 2026-02-20 04:57:44.890318 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 04:57:44.890323 | orchestrator | Friday 20 February 2026 04:57:44 +0000 (0:00:02.221) 0:01:52.178 ******* 2026-02-20 04:57:44.890330 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890340 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890347 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890370 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890408 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890413 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890417 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 
'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:44.890442 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092615 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:57:45.092639 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092661 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092679 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092717 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092826 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092873 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-20 04:57:45.092895 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092929 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 
'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092966 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.092999 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404444 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:57:45.404521 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404534 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404542 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404615 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.404629 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
2026-02-20 04:57:45.404641 | orchestrator | skipping: [testbed-node-2] => (all remaining device items loop0-loop7 skipped; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-02-20 04:57:45.404654 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:57:45.404666 | orchestrator | skipping: [testbed-node-3] => (device items loop0-loop7, sda-sdd, sr0, dm-0-dm-3 skipped; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-02-20 04:57:45.536587 | orchestrator | skipping: [testbed-node-4] => (device items loop0-loop7, sda-sdd, sr0, dm-0-dm-3 skipped; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-02-20 04:57:45.916873 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:57:45.916936 | orchestrator | skipping: [testbed-node-5] => (device items loop1, dm-1, sdd, sdb, loop6, loop4, sr0, loop2, dm-2 skipped; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-02-20 04:57:45.917064 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:57:45.982848 | orchestrator | skipping: [testbed-node-5] => (item
{'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.982953 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.982968 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.982979 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 
1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983037 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE 
interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983056 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983082 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983093 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983137 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:57:45.983166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 
'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773541 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd7eff79e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7eff79e-3548-4942-90ff-36a0d3bc2152-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773675 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773714 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773724 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:00.773763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 
'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 04:58:00.773791 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:00.773800 | orchestrator | 2026-02-20 04:58:00.773809 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 04:58:00.773818 | orchestrator | Friday 20 February 2026 04:57:47 +0000 (0:00:02.446) 0:01:54.625 ******* 2026-02-20 04:58:00.773826 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:58:00.773835 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:58:00.773843 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:58:00.773851 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:58:00.773859 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:58:00.773866 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:58:00.773874 | orchestrator | ok: [testbed-manager] 2026-02-20 04:58:00.773882 | orchestrator | 2026-02-20 04:58:00.773890 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 04:58:00.773898 | orchestrator | Friday 20 February 2026 04:57:49 +0000 (0:00:02.540) 0:01:57.166 ******* 2026-02-20 04:58:00.773906 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:58:00.773914 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:58:00.773922 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:58:00.773930 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:58:00.773937 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:58:00.773945 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:58:00.773953 | orchestrator | ok: [testbed-manager] 
2026-02-20 04:58:00.773961 | orchestrator | 2026-02-20 04:58:00.773969 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 04:58:00.773977 | orchestrator | Friday 20 February 2026 04:57:51 +0000 (0:00:01.986) 0:01:59.153 ******* 2026-02-20 04:58:00.773985 | orchestrator | ok: [testbed-node-0] 2026-02-20 04:58:00.773993 | orchestrator | ok: [testbed-node-1] 2026-02-20 04:58:00.774000 | orchestrator | ok: [testbed-node-2] 2026-02-20 04:58:00.774008 | orchestrator | ok: [testbed-node-3] 2026-02-20 04:58:00.774060 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:00.774069 | orchestrator | ok: [testbed-node-4] 2026-02-20 04:58:00.774077 | orchestrator | ok: [testbed-node-5] 2026-02-20 04:58:00.774086 | orchestrator | 2026-02-20 04:58:00.774096 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 04:58:00.774105 | orchestrator | Friday 20 February 2026 04:57:54 +0000 (0:00:02.454) 0:02:01.608 ******* 2026-02-20 04:58:00.774115 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:58:00.774124 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:58:00.774142 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:58:00.774151 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:00.774160 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:00.774169 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:00.774178 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:00.774188 | orchestrator | 2026-02-20 04:58:00.774198 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 04:58:00.774214 | orchestrator | Friday 20 February 2026 04:57:56 +0000 (0:00:01.901) 0:02:03.509 ******* 2026-02-20 04:58:00.774286 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:58:00.774302 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:58:00.774312 | 
orchestrator | skipping: [testbed-node-2] 2026-02-20 04:58:00.774321 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:00.774330 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:00.774340 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:00.774350 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-20 04:58:00.774360 | orchestrator | 2026-02-20 04:58:00.774369 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 04:58:00.774379 | orchestrator | Friday 20 February 2026 04:57:58 +0000 (0:00:02.702) 0:02:06.212 ******* 2026-02-20 04:58:00.774388 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:58:00.774398 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:58:00.774407 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:58:00.774416 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:00.774426 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:00.774436 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:00.774444 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:00.774452 | orchestrator | 2026-02-20 04:58:00.774460 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 04:58:40.304626 | orchestrator | Friday 20 February 2026 04:58:00 +0000 (0:00:02.036) 0:02:08.248 ******* 2026-02-20 04:58:40.304709 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 04:58:40.304720 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-20 04:58:40.304774 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-20 04:58:40.304782 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-20 04:58:40.304791 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 04:58:40.304799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-20 04:58:40.304806 | 
orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-20 04:58:40.304811 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-20 04:58:40.304816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-20 04:58:40.304821 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-20 04:58:40.304826 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-20 04:58:40.304832 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-20 04:58:40.304836 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 04:58:40.304841 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-20 04:58:40.304846 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-20 04:58:40.304850 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-20 04:58:40.304855 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-20 04:58:40.304860 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-20 04:58:40.304865 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-20 04:58:40.304869 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-20 04:58:40.304874 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-20 04:58:40.304879 | orchestrator | 2026-02-20 04:58:40.304884 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 04:58:40.304889 | orchestrator | Friday 20 February 2026 04:58:04 +0000 (0:00:03.303) 0:02:11.552 ******* 2026-02-20 04:58:40.304895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 04:58:40.304900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 04:58:40.304905 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 04:58:40.304910 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:58:40.304915 
| orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 04:58:40.304936 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-20 04:58:40.304941 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 04:58:40.304946 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:58:40.304950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-20 04:58:40.304955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-20 04:58:40.304959 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-20 04:58:40.304964 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:58:40.304969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 04:58:40.304973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 04:58:40.304978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 04:58:40.304982 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:40.304987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-20 04:58:40.304991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-20 04:58:40.304996 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-20 04:58:40.305014 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:40.305020 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 04:58:40.305024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 04:58:40.305029 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 04:58:40.305033 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:40.305038 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-20 04:58:40.305043 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-20 
04:58:40.305048 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-20 04:58:40.305052 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:40.305057 | orchestrator | 2026-02-20 04:58:40.305062 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 04:58:40.305066 | orchestrator | Friday 20 February 2026 04:58:06 +0000 (0:00:02.374) 0:02:13.927 ******* 2026-02-20 04:58:40.305071 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:58:40.305076 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:58:40.305080 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:58:40.305085 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:58:40.305092 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:58:40.305097 | orchestrator | 2026-02-20 04:58:40.305103 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 04:58:40.305109 | orchestrator | Friday 20 February 2026 04:58:08 +0000 (0:00:02.351) 0:02:16.279 ******* 2026-02-20 04:58:40.305114 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:40.305120 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:40.305125 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:40.305130 | orchestrator | 2026-02-20 04:58:40.305135 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 04:58:40.305140 | orchestrator | Friday 20 February 2026 04:58:10 +0000 (0:00:01.535) 0:02:17.815 ******* 2026-02-20 04:58:40.305145 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:58:40.305151 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:58:40.305167 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:58:40.305172 | orchestrator | 2026-02-20 
04:58:40.305177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 04:58:40.305183 | orchestrator | Friday 20 February 2026 04:58:11 +0000 (0:00:01.325) 0:02:19.141 *******
2026-02-20 04:58:40.305188 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:58:40.305193 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:58:40.305198 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:58:40.305209 | orchestrator |
2026-02-20 04:58:40.305215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 04:58:40.305221 | orchestrator | Friday 20 February 2026 04:58:12 +0000 (0:00:01.300) 0:02:20.441 *******
2026-02-20 04:58:40.305229 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:58:40.305238 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:58:40.305244 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:58:40.305250 | orchestrator |
2026-02-20 04:58:40.305256 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 04:58:40.305262 | orchestrator | Friday 20 February 2026 04:58:14 +0000 (0:00:01.407) 0:02:21.849 *******
2026-02-20 04:58:40.305279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 04:58:40.305285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 04:58:40.305298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 04:58:40.305303 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:58:40.305308 | orchestrator |
2026-02-20 04:58:40.305313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 04:58:40.305319 | orchestrator | Friday 20 February 2026 04:58:16 +0000 (0:00:01.657) 0:02:23.506 *******
2026-02-20 04:58:40.305324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 04:58:40.305329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 04:58:40.305334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 04:58:40.305339 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:58:40.305344 | orchestrator |
2026-02-20 04:58:40.305350 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 04:58:40.305355 | orchestrator | Friday 20 February 2026 04:58:17 +0000 (0:00:01.611) 0:02:25.117 *******
2026-02-20 04:58:40.305360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 04:58:40.305365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 04:58:40.305370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 04:58:40.305375 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:58:40.305380 | orchestrator |
2026-02-20 04:58:40.305386 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 04:58:40.305391 | orchestrator | Friday 20 February 2026 04:58:19 +0000 (0:00:01.688) 0:02:26.805 *******
2026-02-20 04:58:40.305396 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:58:40.305401 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:58:40.305406 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:58:40.305411 | orchestrator |
2026-02-20 04:58:40.305417 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 04:58:40.305422 | orchestrator | Friday 20 February 2026 04:58:20 +0000 (0:00:01.380) 0:02:28.186 *******
2026-02-20 04:58:40.305427 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-20 04:58:40.305432 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-20 04:58:40.305437 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-20 04:58:40.305445 | orchestrator |
2026-02-20 04:58:40.305453 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 04:58:40.305459 | orchestrator | Friday 20 February 2026 04:58:22 +0000 (0:00:01.512) 0:02:29.699 *******
2026-02-20 04:58:40.305464 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:58:40.305473 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 04:58:40.305480 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 04:58:40.305485 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 04:58:40.305490 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 04:58:40.305495 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 04:58:40.305504 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 04:58:40.305510 | orchestrator |
2026-02-20 04:58:40.305515 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 04:58:40.305520 | orchestrator | Friday 20 February 2026 04:58:24 +0000 (0:00:02.036) 0:02:31.735 *******
2026-02-20 04:58:40.305525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 04:58:40.305530 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 04:58:40.305535 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 04:58:40.305541 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 04:58:40.305546 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 04:58:40.305551 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 04:58:40.305556 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 04:58:40.305561 | orchestrator |
2026-02-20 04:58:40.305567 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-20 04:58:40.305572 | orchestrator | Friday 20 February 2026 04:58:27 +0000 (0:00:02.841) 0:02:34.577 *******
2026-02-20 04:58:40.305577 | orchestrator | changed: [testbed-node-4]
2026-02-20 04:58:40.305582 | orchestrator | changed: [testbed-node-3]
2026-02-20 04:58:40.305588 | orchestrator | changed: [testbed-node-5]
2026-02-20 04:58:40.305596 | orchestrator | changed: [testbed-manager]
2026-02-20 04:59:15.386161 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:59:15.386257 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:59:15.386268 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:59:15.386276 | orchestrator |
2026-02-20 04:59:15.386285 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-20 04:59:15.386294 | orchestrator | Friday 20 February 2026 04:58:40 +0000 (0:00:13.196) 0:02:47.774 *******
2026-02-20 04:59:15.386302 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386310 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386317 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386325 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386332 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386339 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386359 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386367 | orchestrator |
2026-02-20 04:59:15.386382 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-20 04:59:15.386390 | orchestrator | Friday 20 February 2026 04:58:42 +0000 (0:00:02.023) 0:02:49.798 *******
2026-02-20 04:59:15.386409 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386416 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386424 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386432 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386447 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386454 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386462 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386469 | orchestrator |
2026-02-20 04:59:15.386476 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-20 04:59:15.386484 | orchestrator | Friday 20 February 2026 04:58:44 +0000 (0:00:01.865) 0:02:51.663 *******
2026-02-20 04:59:15.386491 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386498 | orchestrator | changed: [testbed-node-1]
2026-02-20 04:59:15.386505 | orchestrator | changed: [testbed-node-2]
2026-02-20 04:59:15.386513 | orchestrator | changed: [testbed-node-0]
2026-02-20 04:59:15.386520 | orchestrator | changed: [testbed-node-3]
2026-02-20 04:59:15.386527 | orchestrator | changed: [testbed-node-4]
2026-02-20 04:59:15.386535 | orchestrator | changed: [testbed-node-5]
2026-02-20 04:59:15.386542 | orchestrator |
2026-02-20 04:59:15.386549 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-20 04:59:15.386575 | orchestrator | Friday 20 February 2026 04:58:47 +0000 (0:00:02.990) 0:02:54.654 *******
2026-02-20 04:59:15.386584 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-20 04:59:15.386593 | orchestrator |
2026-02-20 04:59:15.386601 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-20 04:59:15.386608 | orchestrator | Friday 20 February 2026 04:58:49 +0000 (0:00:02.792) 0:02:57.447 *******
2026-02-20 04:59:15.386615 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386622 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386630 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386637 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386645 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386652 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386659 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386666 | orchestrator |
2026-02-20 04:59:15.386673 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-20 04:59:15.386681 | orchestrator | Friday 20 February 2026 04:58:51 +0000 (0:00:01.947) 0:02:59.395 *******
2026-02-20 04:59:15.386688 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386695 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386702 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386709 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386718 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386801 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386814 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386825 | orchestrator |
2026-02-20 04:59:15.386833 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-20 04:59:15.386840 | orchestrator | Friday 20 February 2026 04:58:53 +0000 (0:00:01.990) 0:03:01.385 *******
2026-02-20 04:59:15.386847 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386855 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386862 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386869 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386876 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386883 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386891 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386898 | orchestrator |
2026-02-20 04:59:15.386905 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-20 04:59:15.386913 | orchestrator | Friday 20 February 2026 04:58:55 +0000 (0:00:01.934) 0:03:03.320 *******
2026-02-20 04:59:15.386920 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.386927 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.386935 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.386942 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.386950 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.386957 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.386964 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.386971 | orchestrator |
2026-02-20 04:59:15.386979 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-20 04:59:15.386986 | orchestrator | Friday 20 February 2026 04:58:58 +0000 (0:00:02.205) 0:03:05.525 *******
2026-02-20 04:59:15.386993 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387000 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387007 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387015 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387022 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387029 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387036 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387044 | orchestrator |
2026-02-20 04:59:15.387051 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-20 04:59:15.387071 | orchestrator | Friday 20 February 2026 04:58:59 +0000 (0:00:01.856) 0:03:07.381 *******
2026-02-20 04:59:15.387093 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387100 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387108 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387115 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387123 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387130 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387137 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387145 | orchestrator |
2026-02-20 04:59:15.387152 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-20 04:59:15.387160 | orchestrator | Friday 20 February 2026 04:59:02 +0000 (0:00:02.168) 0:03:09.550 *******
2026-02-20 04:59:15.387167 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387174 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387182 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387189 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387196 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387203 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387211 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387218 | orchestrator |
2026-02-20 04:59:15.387226 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-20 04:59:15.387233 | orchestrator | Friday 20 February 2026 04:59:04 +0000 (0:00:02.053) 0:03:11.604 *******
2026-02-20 04:59:15.387240 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387248 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387255 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387262 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387270 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387277 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387284 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387292 | orchestrator |
2026-02-20 04:59:15.387299 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-20 04:59:15.387307 | orchestrator | Friday 20 February 2026 04:59:06 +0000 (0:00:02.211) 0:03:13.815 *******
2026-02-20 04:59:15.387314 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387322 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387329 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387336 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387343 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387351 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387358 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387365 | orchestrator |
2026-02-20 04:59:15.387373 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-20 04:59:15.387380 | orchestrator | Friday 20 February 2026 04:59:08 +0000 (0:00:02.000) 0:03:15.816 *******
2026-02-20 04:59:15.387388 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387395 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387402 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387410 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387417 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387424 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387431 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387439 | orchestrator |
2026-02-20 04:59:15.387446 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-20 04:59:15.387454 | orchestrator | Friday 20 February 2026 04:59:10 +0000 (0:00:02.083) 0:03:17.899 *******
2026-02-20 04:59:15.387461 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387468 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387475 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387483 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387490 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387503 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387510 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387518 | orchestrator |
2026-02-20 04:59:15.387525 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-20 04:59:15.387537 | orchestrator | Friday 20 February 2026 04:59:12 +0000 (0:00:02.195) 0:03:20.095 *******
2026-02-20 04:59:15.387544 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387551 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387559 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387566 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387575 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387588 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:15.387599 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:15.387611 | orchestrator |
2026-02-20 04:59:15.387622 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-20 04:59:15.387634 | orchestrator | Friday 20 February 2026 04:59:14 +0000 (0:00:01.839) 0:03:21.934 *******
2026-02-20 04:59:15.387646 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:15.387657 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:15.387669 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:15.387683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 04:59:15.387697 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 04:59:15.387709 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:15.387742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 04:59:15.387751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 04:59:15.387759 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:15.387766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 04:59:15.387781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 04:59:42.881341 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.881498 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.881535 | orchestrator |
2026-02-20 04:59:42.881557 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-20 04:59:42.881577 | orchestrator | Friday 20 February 2026 04:59:16 +0000 (0:00:02.071) 0:03:24.006 *******
2026-02-20 04:59:42.881594 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.881609 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.881625 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.881642 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.881659 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.881675 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.881692 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.881710 | orchestrator |
2026-02-20 04:59:42.881808 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-20 04:59:42.881828 | orchestrator | Friday 20 February 2026 04:59:18 +0000 (0:00:01.968) 0:03:25.974 *******
2026-02-20 04:59:42.881845 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.881864 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.881882 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.881901 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.881920 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.881938 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.881990 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882009 | orchestrator |
2026-02-20 04:59:42.882113 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-20 04:59:42.882133 | orchestrator | Friday 20 February 2026 04:59:20 +0000 (0:00:02.105) 0:03:28.080 *******
2026-02-20 04:59:42.882153 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.882171 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.882190 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.882207 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.882226 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.882237 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.882248 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882259 | orchestrator |
2026-02-20 04:59:42.882270 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-20 04:59:42.882281 | orchestrator | Friday 20 February 2026 04:59:22 +0000 (0:00:01.881) 0:03:29.962 *******
2026-02-20 04:59:42.882292 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.882303 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.882313 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.882324 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.882336 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.882347 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.882358 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882368 | orchestrator |
2026-02-20 04:59:42.882379 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-20 04:59:42.882391 | orchestrator | Friday 20 February 2026 04:59:24 +0000 (0:00:02.217) 0:03:32.180 *******
2026-02-20 04:59:42.882401 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.882412 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.882423 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.882434 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.882444 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.882455 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.882466 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882477 | orchestrator |
2026-02-20 04:59:42.882487 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-20 04:59:42.882498 | orchestrator | Friday 20 February 2026 04:59:26 +0000 (0:00:02.028) 0:03:34.209 *******
2026-02-20 04:59:42.882509 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.882520 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.882546 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.882558 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.882569 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.882579 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.882590 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882601 | orchestrator |
2026-02-20 04:59:42.882612 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-20 04:59:42.882624 | orchestrator | Friday 20 February 2026 04:59:28 +0000 (0:00:01.847) 0:03:36.057 *******
2026-02-20 04:59:42.882642 | orchestrator | skipping: [testbed-node-0]
2026-02-20 04:59:42.882667 | orchestrator | skipping: [testbed-node-1]
2026-02-20 04:59:42.882691 | orchestrator | skipping: [testbed-node-2]
2026-02-20 04:59:42.882708 | orchestrator | skipping: [testbed-manager]
2026-02-20 04:59:42.882800 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 04:59:42.882819 | orchestrator |
2026-02-20 04:59:42.882837 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-20 04:59:42.882855 | orchestrator | Friday 20 February 2026 04:59:31 +0000 (0:00:02.469) 0:03:38.527 *******
2026-02-20 04:59:42.882872 | orchestrator | ok: [testbed-node-3]
2026-02-20 04:59:42.882891 | orchestrator | ok: [testbed-node-4]
2026-02-20 04:59:42.882907 | orchestrator | ok: [testbed-node-5]
2026-02-20 04:59:42.882942 | orchestrator |
2026-02-20 04:59:42.882962 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-20 04:59:42.882979 | orchestrator | Friday 20 February 2026 04:59:32 +0000 (0:00:01.419) 0:03:39.946 *******
2026-02-20 04:59:42.882998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 04:59:42.883020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 04:59:42.883039 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 04:59:42.883111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 04:59:42.883123 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 04:59:42.883145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 04:59:42.883156 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883167 | orchestrator |
2026-02-20 04:59:42.883178 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-20 04:59:42.883189 | orchestrator | Friday 20 February 2026 04:59:33 +0000 (0:00:01.492) 0:03:41.439 *******
2026-02-20 04:59:42.883203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883228 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883262 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883293 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:42.883313 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883324 | orchestrator |
2026-02-20 04:59:42.883336 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-20 04:59:42.883354 | orchestrator | Friday 20 February 2026 04:59:35 +0000 (0:00:01.608) 0:03:43.048 *******
2026-02-20 04:59:42.883381 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883403 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883420 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883438 | orchestrator |
2026-02-20 04:59:42.883456 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-20 04:59:42.883474 | orchestrator | Friday 20 February 2026 04:59:36 +0000 (0:00:01.298) 0:03:44.346 *******
2026-02-20 04:59:42.883492 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883510 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883530 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883548 | orchestrator |
2026-02-20 04:59:42.883566 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-20 04:59:42.883578 | orchestrator | Friday 20 February 2026 04:59:38 +0000 (0:00:01.322) 0:03:45.668 *******
2026-02-20 04:59:42.883589 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883604 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883623 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883638 | orchestrator |
2026-02-20 04:59:42.883664 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-20 04:59:42.883683 | orchestrator | Friday 20 February 2026 04:59:39 +0000 (0:00:01.301) 0:03:46.970 *******
2026-02-20 04:59:42.883701 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:42.883745 | orchestrator | skipping: [testbed-node-4]
2026-02-20 04:59:42.883764 | orchestrator | skipping: [testbed-node-5]
2026-02-20 04:59:42.883782 | orchestrator |
2026-02-20 04:59:42.883798 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-20 04:59:42.883815 | orchestrator | Friday 20 February 2026 04:59:40 +0000 (0:00:01.325) 0:03:48.296 *******
2026-02-20 04:59:42.883847 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})
2026-02-20 04:59:44.290171 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})
2026-02-20 04:59:44.290253 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})
2026-02-20 04:59:44.290263 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})
2026-02-20 04:59:44.290270 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})
2026-02-20 04:59:44.290277 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})
2026-02-20 04:59:44.290285 | orchestrator |
2026-02-20 04:59:44.290293 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-20 04:59:44.290301 | orchestrator | Friday 20 February 2026 04:59:42 +0000 (0:00:02.051) 0:03:50.348 *******
2026-02-20 04:59:44.290313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f/osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1771555969.534829, 'mtime': 1771555969.5318289, 'ctime': 1771555969.5318289, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f/osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:44.290354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2/osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1771555988.8441598, 'mtime': 1771555988.83916, 'ctime': 1771555988.83916, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2/osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:44.290364 | orchestrator | skipping: [testbed-node-3]
2026-02-20 04:59:44.290388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef/osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 952, 'dev': 6, 'nlink': 1, 'atime': 1771555969.57341, 'mtime': 1771555969.56641, 'ctime': 1771555969.56641, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef/osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'}, 'ansible_loop_var': 'item'})
2026-02-20 04:59:44.290397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd/osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 962, 'dev': 6, 'nlink': 1, 'atime': 1771555990.0537603, 'mtime': 1771555990.0467603, 'ctime': 1771555990.0467603, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd/osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:44.290409 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:44.290420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae/osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1771555969.5097914, 'mtime': 1771555969.5027914, 'ctime': 1771555969.5027914, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae/osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 
'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:44.290434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2/osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1771555988.234095, 'mtime': 1771555988.230095, 'ctime': 1771555988.230095, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2/osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560277 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:59:54.560405 | orchestrator | 2026-02-20 04:59:54.560426 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-20 04:59:54.560443 | orchestrator | Friday 20 February 2026 04:59:44 +0000 (0:00:01.418) 0:03:51.767 ******* 2026-02-20 04:59:54.560459 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 04:59:54.560471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 04:59:54.560499 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:59:54.560508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})  2026-02-20 04:59:54.560519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})  2026-02-20 04:59:54.560533 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:54.560546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 04:59:54.560559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 04:59:54.560573 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:59:54.560586 | orchestrator | 2026-02-20 04:59:54.560600 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-20 04:59:54.560615 | orchestrator | Friday 20 February 2026 04:59:45 +0000 (0:00:01.375) 0:03:53.142 ******* 2026-02-20 04:59:54.560649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}, 'ansible_loop_var': 'item'})  2026-02-20 
04:59:54.560666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560679 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:59:54.560687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560704 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:54.560712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560767 | orchestrator | skipping: [testbed-node-5] 
2026-02-20 04:59:54.560775 | orchestrator | 2026-02-20 04:59:54.560783 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-20 04:59:54.560792 | orchestrator | Friday 20 February 2026 04:59:47 +0000 (0:00:01.417) 0:03:54.560 ******* 2026-02-20 04:59:54.560800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'})  2026-02-20 04:59:54.560818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'})  2026-02-20 04:59:54.560827 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:59:54.560857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'})  2026-02-20 04:59:54.560867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'})  2026-02-20 04:59:54.560876 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:54.560886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'})  2026-02-20 04:59:54.560895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'})  2026-02-20 04:59:54.560904 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:59:54.560913 | orchestrator | 2026-02-20 04:59:54.560922 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-20 04:59:54.560932 | orchestrator | Friday 20 February 2026 04:59:48 +0000 (0:00:01.574) 0:03:56.134 ******* 
2026-02-20 04:59:54.560942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-59fbb122-dcd4-5ddb-8fde-378adfe4b14f', 'data_vg': 'ceph-59fbb122-dcd4-5ddb-8fde-378adfe4b14f'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-dc3a4123-87de-5eee-bc1c-01eb52a96fe2', 'data_vg': 'ceph-dc3a4123-87de-5eee-bc1c-01eb52a96fe2'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560961 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:59:54.560975 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ad1d47ce-3300-5f5f-a456-60212d7294ef', 'data_vg': 'ceph-ad1d47ce-3300-5f5f-a456-60212d7294ef'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd', 'data_vg': 'ceph-5fdd3cdc-a96e-5423-81ac-d20dc4add6fd'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.560995 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:54.561004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae', 'data_vg': 'ceph-9fd87d74-f7c4-5aa7-94da-ba8f1e0708ae'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.561013 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5fe77357-4c85-56ab-aabd-7cb5a18434f2', 'data_vg': 'ceph-5fe77357-4c85-56ab-aabd-7cb5a18434f2'}, 'ansible_loop_var': 'item'})  2026-02-20 04:59:54.561023 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:59:54.561032 | orchestrator | 2026-02-20 04:59:54.561041 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-20 04:59:54.561051 | orchestrator | Friday 20 February 2026 04:59:50 +0000 (0:00:01.421) 0:03:57.556 ******* 2026-02-20 04:59:54.561065 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:59:54.561075 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:59:54.561084 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:59:54.561094 | orchestrator | skipping: [testbed-node-3] 2026-02-20 04:59:54.561104 | orchestrator | skipping: [testbed-node-4] 2026-02-20 04:59:54.561112 | orchestrator | skipping: [testbed-node-5] 2026-02-20 04:59:54.561121 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:59:54.561130 | orchestrator | 2026-02-20 04:59:54.561140 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-20 04:59:54.561149 | orchestrator | Friday 20 February 2026 04:59:51 +0000 (0:00:01.846) 0:03:59.402 ******* 2026-02-20 04:59:54.561158 | orchestrator | skipping: [testbed-node-0] 2026-02-20 04:59:54.561168 | orchestrator | skipping: [testbed-node-1] 2026-02-20 04:59:54.561177 | orchestrator | skipping: [testbed-node-2] 2026-02-20 04:59:54.561185 | orchestrator | skipping: [testbed-manager] 2026-02-20 04:59:54.561193 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 04:59:54.561202 | orchestrator | 2026-02-20 04:59:54.561210 | orchestrator | TASK [ceph-validate 
: Fail if ec_profile is not set for ec pools] ************** 2026-02-20 04:59:54.561218 | orchestrator | Friday 20 February 2026 04:59:54 +0000 (0:00:02.496) 0:04:01.898 ******* 2026-02-20 04:59:54.561232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481868 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.481878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481920 | orchestrator 
| skipping: [testbed-node-4] 2026-02-20 05:00:05.481928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.481986 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:05.482057 | orchestrator | 2026-02-20 05:00:05.482070 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-20 05:00:05.482081 | orchestrator | Friday 20 February 2026 04:59:55 +0000 (0:00:01.410) 0:04:03.309 ******* 2026-02-20 05:00:05.482090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482127 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482136 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.482145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482200 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:05.482210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482271 | orchestrator | skipping: 
[testbed-node-5] 2026-02-20 05:00:05.482280 | orchestrator | 2026-02-20 05:00:05.482289 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-20 05:00:05.482298 | orchestrator | Friday 20 February 2026 04:59:57 +0000 (0:00:01.589) 0:04:04.898 ******* 2026-02-20 05:00:05.482307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482351 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.482367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482409 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482417 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:05.482427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:00:05.482493 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:05.482502 | orchestrator | 2026-02-20 05:00:05.482511 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-20 05:00:05.482520 | orchestrator | Friday 20 February 2026 04:59:58 +0000 (0:00:01.444) 0:04:06.343 ******* 2026-02-20 05:00:05.482528 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:05.482537 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:05.482546 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:05.482555 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.482563 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:05.482572 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:05.482581 | orchestrator | skipping: [testbed-manager] 2026-02-20 05:00:05.482590 | orchestrator | 2026-02-20 05:00:05.482598 | orchestrator | TASK [ceph-validate : Include 
check_rbdmirror.yml] ***************************** 2026-02-20 05:00:05.482607 | orchestrator | Friday 20 February 2026 05:00:00 +0000 (0:00:02.044) 0:04:08.387 ******* 2026-02-20 05:00:05.482616 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:05.482625 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:05.482633 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:05.482642 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.482651 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:05.482660 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:05.482668 | orchestrator | skipping: [testbed-manager] 2026-02-20 05:00:05.482677 | orchestrator | 2026-02-20 05:00:05.482686 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-20 05:00:05.482695 | orchestrator | Friday 20 February 2026 05:00:03 +0000 (0:00:02.259) 0:04:10.647 ******* 2026-02-20 05:00:05.482703 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:05.482712 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:05.482744 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:05.482754 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:05.482763 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:05.482771 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:05.482780 | orchestrator | skipping: [testbed-manager] 2026-02-20 05:00:05.482789 | orchestrator | 2026-02-20 05:00:05.482798 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-20 05:00:05.482813 | orchestrator | Friday 20 February 2026 05:00:05 +0000 (0:00:02.073) 0:04:12.721 ******* 2026-02-20 05:00:05.482829 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:16.096600 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:16.096694 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:16.096702 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:16.096707 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:16.096712 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:16.096750 | orchestrator | skipping: [testbed-manager] 2026-02-20 05:00:16.096757 | orchestrator | 2026-02-20 05:00:16.096764 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-20 05:00:16.096793 | orchestrator | Friday 20 February 2026 05:00:07 +0000 (0:00:01.848) 0:04:14.569 ******* 2026-02-20 05:00:16.096799 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:16.096805 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:16.096810 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:16.096815 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:00:16.096821 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:00:16.096825 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:00:16.096831 | orchestrator | skipping: [testbed-manager] 2026-02-20 05:00:16.096836 | orchestrator | 2026-02-20 05:00:16.096841 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-20 05:00:16.096846 | orchestrator | Friday 20 February 2026 05:00:09 +0000 (0:00:02.067) 0:04:16.636 ******* 2026-02-20 05:00:16.096852 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:00:16.096857 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:00:16.096861 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:00:16.096866 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 05:00:16.096870 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:16.096875 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:16.096880 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:16.096884 | orchestrator |
2026-02-20 05:00:16.096889 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-02-20 05:00:16.096894 | orchestrator | Friday 20 February 2026 05:00:11 +0000 (0:00:01.909) 0:04:18.546 *******
2026-02-20 05:00:16.096899 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:16.096903 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:16.096908 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:16.096913 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:16.096917 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:16.096922 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:16.096927 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:16.096931 | orchestrator |
2026-02-20 05:00:16.096948 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-02-20 05:00:16.096953 | orchestrator | Friday 20 February 2026 05:00:13 +0000 (0:00:02.392) 0:04:20.939 *******
2026-02-20 05:00:16.096959 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.096965 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.096972 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.096978 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.096983 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:16.097005 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:16.097010 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:16.097016 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.097024 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.097031 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.097042 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.097050 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:16.097059 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:16.097066 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:16.097087 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.097095 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.097101 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.097108 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.097114 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:16.097133 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:16.097142 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.097156 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:16.097163 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.097176 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.097185 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.097194 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:16.097206 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.097212 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.097217 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.097222 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:16.097228 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.097233 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:16.097239 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:16.097244 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:16.097250 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:16.097255 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:16.097260 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:16.097266 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:16.097272 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:16.097281 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.128971 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129053 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129068 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129076 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129084 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129092 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129101 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:20.129130 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129154 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:20.129159 | orchestrator |
2026-02-20 05:00:20.129165 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-20 05:00:20.129171 | orchestrator | Friday 20 February 2026 05:00:16 +0000 (0:00:02.626) 0:04:23.566 *******
2026-02-20 05:00:20.129178 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:20.129186 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:20.129194 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:20.129202 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:20.129211 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:20.129219 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:20.129226 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:20.129231 | orchestrator |
2026-02-20 05:00:20.129236 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-20 05:00:20.129241 | orchestrator | Friday 20 February 2026 05:00:17 +0000 (0:00:01.893) 0:04:25.459 *******
2026-02-20 05:00:20.129249 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129258 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129266 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129274 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129283 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129293 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129300 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:20.129309 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129318 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129326 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129333 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129360 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129369 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129377 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:20.129386 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129402 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129408 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129413 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129417 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129422 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129427 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:20.129436 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129441 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129445 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129452 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129460 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129468 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129477 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129486 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:20.129494 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:20.129502 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129510 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:20.129518 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:20.129527 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:20.129536 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:20.129551 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:20.129557 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:20.129566 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:59.359576 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:59.359684 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:59.359701 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-20 05:00:59.359710 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-20 05:00:59.359767 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-20 05:00:59.359779 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-20 05:00:59.359800 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:59.359808 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-20 05:00:59.359815 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.359823 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-20 05:00:59.359832 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.359843 | orchestrator |
2026-02-20 05:00:59.359852 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-20 05:00:59.359860 | orchestrator | Friday 20 February 2026 05:00:20 +0000 (0:00:02.148) 0:04:27.607 *******
2026-02-20 05:00:59.359867 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.359874 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.359880 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.359887 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.359894 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.359901 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.359907 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.359914 | orchestrator |
2026-02-20 05:00:59.359921 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-20 05:00:59.359928 | orchestrator | Friday 20 February 2026 05:00:22 +0000 (0:00:01.936) 0:04:29.614 *******
2026-02-20 05:00:59.359935 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.359942 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.359949 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.359955 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.359962 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.359969 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.359976 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.359982 | orchestrator |
2026-02-20 05:00:59.359989 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-20 05:00:59.360016 | orchestrator | Friday 20 February 2026 05:00:24 +0000 (0:00:01.936) 0:04:31.551 *******
2026-02-20 05:00:59.360023 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360030 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.360036 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.360043 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.360050 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.360057 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.360064 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.360070 | orchestrator |
2026-02-20 05:00:59.360077 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-20 05:00:59.360084 | orchestrator | Friday 20 February 2026 05:00:25 +0000 (0:00:01.909) 0:04:33.461 *******
2026-02-20 05:00:59.360092 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-20 05:00:59.360101 | orchestrator |
2026-02-20 05:00:59.360107 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-20 05:00:59.360114 | orchestrator | Friday 20 February 2026 05:00:28 +0000 (0:00:02.406) 0:04:35.867 *******
2026-02-20 05:00:59.360122 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360135 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360145 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360153 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360161 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360183 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360192 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-20 05:00:59.360200 | orchestrator |
2026-02-20 05:00:59.360208 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-20 05:00:59.360216 | orchestrator | Friday 20 February 2026 05:00:30 +0000 (0:00:02.214) 0:04:38.081 *******
2026-02-20 05:00:59.360224 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360232 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.360239 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.360247 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.360255 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.360263 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.360271 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.360279 | orchestrator |
2026-02-20 05:00:59.360287 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-20 05:00:59.360295 | orchestrator | Friday 20 February 2026 05:00:32 +0000 (0:00:01.911) 0:04:39.993 *******
2026-02-20 05:00:59.360303 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360311 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.360319 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.360327 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.360335 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.360343 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.360354 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.360365 | orchestrator |
2026-02-20 05:00:59.360376 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-20 05:00:59.360384 | orchestrator | Friday 20 February 2026 05:00:34 +0000 (0:00:01.834) 0:04:41.827 *******
2026-02-20 05:00:59.360391 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:00:59.360404 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:00:59.360413 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:00:59.360421 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:00:59.360435 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:00:59.360443 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:00:59.360450 | orchestrator | ok: [testbed-manager]
2026-02-20 05:00:59.360457 | orchestrator |
2026-02-20 05:00:59.360464 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-20 05:00:59.360471 | orchestrator | Friday 20 February 2026 05:00:36 +0000 (0:00:02.329) 0:04:44.157 *******
2026-02-20 05:00:59.360478 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360484 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.360491 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.360498 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.360505 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.360512 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.360518 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.360525 | orchestrator |
2026-02-20 05:00:59.360532 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-20 05:00:59.360539 | orchestrator | Friday 20 February 2026 05:00:38 +0000 (0:00:02.141) 0:04:46.299 *******
2026-02-20 05:00:59.360546 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360553 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:00:59.360559 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:00:59.360566 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:00:59.360573 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:00:59.360580 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:00:59.360586 | orchestrator | skipping: [testbed-manager]
2026-02-20 05:00:59.360593 | orchestrator |
2026-02-20 05:00:59.360600 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-20 05:00:59.360607 | orchestrator | Friday 20 February 2026 05:00:40 +0000 (0:00:02.704) 0:04:48.439 *******
2026-02-20 05:00:59.360614 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:00:59.360620 | orchestrator |
2026-02-20 05:00:59.360627 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-20 05:00:59.360634 | orchestrator | Friday 20 February 2026 05:00:43 +0000 (0:00:02.704) 0:04:51.143 *******
2026-02-20 05:00:59.360641 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:00:59.360647 | orchestrator |
2026-02-20 05:00:59.360654 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-20 05:00:59.360661 | orchestrator |
2026-02-20 05:00:59.360668 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 05:00:59.360675 | orchestrator | Friday 20 February 2026 05:00:46 +0000 (0:00:02.420) 0:04:53.564 *******
2026-02-20 05:00:59.360681 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:00:59.360688 | orchestrator |
2026-02-20 05:00:59.360695 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 05:00:59.360702 | orchestrator | Friday 20 February 2026 05:00:47 +0000 (0:00:01.396) 0:04:54.960 *******
2026-02-20 05:00:59.360708 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:00:59.360715 | orchestrator |
2026-02-20 05:00:59.360736 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-20 05:00:59.360744 | orchestrator | Friday 20 February 2026 05:00:48 +0000 (0:00:01.083) 0:04:56.043 *******
2026-02-20 05:00:59.360752 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-20 05:00:59.360761 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-20 05:00:59.360781 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-20 05:01:26.629609 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-20 05:01:26.629773 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-20 05:01:26.629827 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}])
2026-02-20 05:01:26.629853 | orchestrator |
2026-02-20 05:01:26.629876 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-20 05:01:26.629896 | orchestrator |
2026-02-20 05:01:26.629908 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-20 05:01:26.629919 | orchestrator | Friday 20 February 2026 05:00:59 +0000 (0:00:10.785) 0:05:06.828 *******
2026-02-20 05:01:26.629931 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.629948 | orchestrator |
2026-02-20 05:01:26.629975 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-20 05:01:26.629994 | orchestrator | Friday 20 February 2026 05:01:00 +0000 (0:00:01.414) 0:05:08.243 *******
2026-02-20 05:01:26.630011 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630104 | orchestrator |
2026-02-20 05:01:26.630123 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-20 05:01:26.630142 | orchestrator | Friday 20 February 2026 05:01:01 +0000 (0:00:01.144) 0:05:09.388 *******
2026-02-20 05:01:26.630172 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:01:26.630193 | orchestrator |
2026-02-20 05:01:26.630213 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-20 05:01:26.630232 | orchestrator | Friday 20 February 2026 05:01:03 +0000 (0:00:01.149) 0:05:10.537 *******
2026-02-20 05:01:26.630251 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630270 | orchestrator |
2026-02-20 05:01:26.630291 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 05:01:26.630313 | orchestrator | Friday 20 February 2026 05:01:04 +0000 (0:00:01.151) 0:05:11.689 *******
2026-02-20 05:01:26.630336 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-20 05:01:26.630355 | orchestrator |
2026-02-20 05:01:26.630375 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 05:01:26.630394 | orchestrator | Friday 20 February 2026 05:01:05 +0000 (0:00:01.128) 0:05:12.817 *******
2026-02-20 05:01:26.630414 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630432 | orchestrator |
2026-02-20 05:01:26.630450 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 05:01:26.630462 | orchestrator | Friday 20 February 2026 05:01:06 +0000 (0:00:01.449) 0:05:14.267 *******
2026-02-20 05:01:26.630473 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630484 | orchestrator |
2026-02-20 05:01:26.630529 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 05:01:26.630555 | orchestrator | Friday 20 February 2026 05:01:07 +0000 (0:00:01.116) 0:05:15.384 *******
2026-02-20 05:01:26.630575 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630593 | orchestrator |
2026-02-20 05:01:26.630611 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 05:01:26.630629 | orchestrator | Friday 20 February 2026 05:01:09 +0000 (0:00:01.474) 0:05:16.858 *******
2026-02-20 05:01:26.630646 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630662 | orchestrator |
2026-02-20 05:01:26.630679 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 05:01:26.630697 | orchestrator | Friday 20 February 2026 05:01:10 +0000 (0:00:01.156) 0:05:18.014 *******
2026-02-20 05:01:26.630798 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630825 | orchestrator |
2026-02-20 05:01:26.630845 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 05:01:26.630864 | orchestrator | Friday 20 February 2026 05:01:11 +0000 (0:00:01.137) 0:05:19.152 *******
2026-02-20 05:01:26.630879 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630890 | orchestrator |
2026-02-20 05:01:26.630901 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 05:01:26.630914 | orchestrator | Friday 20 February 2026 05:01:12 +0000 (0:00:01.135) 0:05:20.287 *******
2026-02-20 05:01:26.630925 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:01:26.630937 | orchestrator |
2026-02-20 05:01:26.630948 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 05:01:26.630959 | orchestrator | Friday 20 February 2026 05:01:13 +0000 (0:00:01.109) 0:05:21.397 *******
2026-02-20 05:01:26.630970 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:01:26.630981 | orchestrator |
2026-02-20 05:01:26.630991 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-20 05:01:26.631003 | orchestrator | Friday 20 February 2026 05:01:15 +0000 (0:00:01.105) 0:05:22.503 *******
2026-02-20 05:01:26.631014 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:01:26.631048 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:01:26.631060 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:01:26.631071 | orchestrator |
2026-02-20 05:01:26.631082 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-20 05:01:26.631093 | orchestrator | Friday 20 February 2026 05:01:16 +0000 (0:00:01.680) 0:05:24.183 *******
2026-02-20 05:01:26.631104 |
orchestrator | ok: [testbed-node-0] 2026-02-20 05:01:26.631115 | orchestrator | 2026-02-20 05:01:26.631126 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:01:26.631137 | orchestrator | Friday 20 February 2026 05:01:17 +0000 (0:00:01.221) 0:05:25.404 ******* 2026-02-20 05:01:26.631148 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:01:26.631159 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:01:26.631170 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:01:26.631181 | orchestrator | 2026-02-20 05:01:26.631192 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:01:26.631213 | orchestrator | Friday 20 February 2026 05:01:21 +0000 (0:00:03.177) 0:05:28.582 ******* 2026-02-20 05:01:26.631225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 05:01:26.631236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:01:26.631247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:01:26.631258 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:26.631269 | orchestrator | 2026-02-20 05:01:26.631280 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:01:26.631290 | orchestrator | Friday 20 February 2026 05:01:22 +0000 (0:00:01.373) 0:05:29.955 ******* 2026-02-20 05:01:26.631316 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631330 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631340 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631350 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:26.631360 | orchestrator | 2026-02-20 05:01:26.631369 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:01:26.631379 | orchestrator | Friday 20 February 2026 05:01:24 +0000 (0:00:01.863) 0:05:31.819 ******* 2026-02-20 05:01:26.631390 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631414 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:26.631424 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:26.631434 | orchestrator | 2026-02-20 05:01:26.631444 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:01:26.631453 | orchestrator | Friday 20 February 2026 05:01:25 +0000 (0:00:01.140) 0:05:32.960 ******* 2026-02-20 05:01:26.631472 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c9a9a7d69b4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:01:18.466540', 'end': '2026-02-20 05:01:18.519812', 'delta': '0:00:00.053272', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9a9a7d69b4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:01:44.913798 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b179183cbe33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:01:19.051100', 'end': '2026-02-20 05:01:19.105996', 'delta': '0:00:00.054896', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b179183cbe33'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:01:44.913986 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '28a82f95a8fd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:01:19.885498', 'end': '2026-02-20 05:01:19.945412', 'delta': '0:00:00.059914', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28a82f95a8fd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:01:44.914099 | orchestrator | 2026-02-20 05:01:44.914131 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:01:44.914150 | orchestrator | Friday 20 February 2026 05:01:26 +0000 (0:00:01.142) 0:05:34.102 ******* 2026-02-20 05:01:44.914173 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:01:44.914189 | orchestrator | 2026-02-20 05:01:44.914202 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:01:44.914215 | orchestrator | Friday 20 February 2026 05:01:28 +0000 (0:00:01.425) 0:05:35.528 ******* 2026-02-20 05:01:44.914229 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914244 | orchestrator | 2026-02-20 05:01:44.914256 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:01:44.914275 | orchestrator | Friday 20 February 2026 05:01:29 +0000 (0:00:01.013) 0:05:36.541 ******* 2026-02-20 05:01:44.914294 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:01:44.914316 | orchestrator | 2026-02-20 
05:01:44.914336 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:01:44.914353 | orchestrator | Friday 20 February 2026 05:01:29 +0000 (0:00:00.903) 0:05:37.445 ******* 2026-02-20 05:01:44.914366 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-20 05:01:44.914386 | orchestrator | 2026-02-20 05:01:44.914406 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:01:44.914425 | orchestrator | Friday 20 February 2026 05:01:32 +0000 (0:00:02.449) 0:05:39.895 ******* 2026-02-20 05:01:44.914447 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:01:44.914466 | orchestrator | 2026-02-20 05:01:44.914481 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:01:44.914492 | orchestrator | Friday 20 February 2026 05:01:33 +0000 (0:00:01.119) 0:05:41.014 ******* 2026-02-20 05:01:44.914503 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914514 | orchestrator | 2026-02-20 05:01:44.914525 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:01:44.914536 | orchestrator | Friday 20 February 2026 05:01:34 +0000 (0:00:01.081) 0:05:42.095 ******* 2026-02-20 05:01:44.914547 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914567 | orchestrator | 2026-02-20 05:01:44.914586 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:01:44.914605 | orchestrator | Friday 20 February 2026 05:01:35 +0000 (0:00:01.185) 0:05:43.281 ******* 2026-02-20 05:01:44.914624 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914641 | orchestrator | 2026-02-20 05:01:44.914658 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:01:44.914676 | orchestrator | Friday 20 February 2026 05:01:36 
+0000 (0:00:01.137) 0:05:44.419 ******* 2026-02-20 05:01:44.914695 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914714 | orchestrator | 2026-02-20 05:01:44.914759 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:01:44.914792 | orchestrator | Friday 20 February 2026 05:01:38 +0000 (0:00:01.126) 0:05:45.545 ******* 2026-02-20 05:01:44.914805 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914816 | orchestrator | 2026-02-20 05:01:44.914827 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:01:44.914838 | orchestrator | Friday 20 February 2026 05:01:39 +0000 (0:00:01.113) 0:05:46.659 ******* 2026-02-20 05:01:44.914849 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914860 | orchestrator | 2026-02-20 05:01:44.914871 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:01:44.914882 | orchestrator | Friday 20 February 2026 05:01:40 +0000 (0:00:01.152) 0:05:47.811 ******* 2026-02-20 05:01:44.914893 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914904 | orchestrator | 2026-02-20 05:01:44.914915 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:01:44.914951 | orchestrator | Friday 20 February 2026 05:01:41 +0000 (0:00:01.111) 0:05:48.922 ******* 2026-02-20 05:01:44.914963 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.914974 | orchestrator | 2026-02-20 05:01:44.914985 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:01:44.914997 | orchestrator | Friday 20 February 2026 05:01:42 +0000 (0:00:01.111) 0:05:50.034 ******* 2026-02-20 05:01:44.915008 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:44.915019 | orchestrator | 2026-02-20 05:01:44.915030 | orchestrator | TASK 
[ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:01:44.915041 | orchestrator | Friday 20 February 2026 05:01:43 +0000 (0:00:01.112) 0:05:51.147 ******* 2026-02-20 05:01:44.915063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:01:44.915117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:44.915182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 
'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:01:46.132479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:46.132641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:01:46.132669 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:01:46.132780 | orchestrator | 2026-02-20 05:01:46.132803 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:01:46.132821 | orchestrator | Friday 20 February 2026 05:01:44 +0000 (0:00:01.232) 0:05:52.379 ******* 2026-02-20 05:01:46.132842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.132863 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.132882 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.132919 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.132966 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.132985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.133020 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.133054 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:01:46.133090 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:02:40.590211 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:02:40.590351 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.590369 | orchestrator | 2026-02-20 05:02:40.590382 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:02:40.590394 | 
orchestrator | Friday 20 February 2026 05:01:46 +0000 (0:00:01.225) 0:05:53.605 *******
2026-02-20 05:02:40.590409 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:02:40.590429 | orchestrator |
2026-02-20 05:02:40.590448 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:02:40.590465 | orchestrator | Friday 20 February 2026 05:01:47 +0000 (0:00:01.504) 0:05:55.110 *******
2026-02-20 05:02:40.590485 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:02:40.590503 | orchestrator |
2026-02-20 05:02:40.590523 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:02:40.590542 | orchestrator | Friday 20 February 2026 05:01:48 +0000 (0:00:01.143) 0:05:56.254 *******
2026-02-20 05:02:40.590557 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:02:40.590569 | orchestrator |
2026-02-20 05:02:40.590580 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:02:40.590591 | orchestrator | Friday 20 February 2026 05:01:50 +0000 (0:00:01.463) 0:05:57.718 *******
2026-02-20 05:02:40.590602 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:02:40.590613 | orchestrator |
2026-02-20 05:02:40.590624 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:02:40.590635 | orchestrator | Friday 20 February 2026 05:01:51 +0000 (0:00:01.110) 0:05:58.828 *******
2026-02-20 05:02:40.590645 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:02:40.590656 | orchestrator |
2026-02-20 05:02:40.590667 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:02:40.590678 | orchestrator | Friday 20 February 2026 05:01:52 +0000 (0:00:01.226) 0:06:00.054 *******
2026-02-20 05:02:40.590689 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:02:40.590700 | orchestrator |
2026-02-20 05:02:40.590711 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:02:40.590765 | orchestrator | Friday 20 February 2026 05:01:53 +0000 (0:00:01.154) 0:06:01.209 *******
2026-02-20 05:02:40.590778 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:02:40.590793 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 05:02:40.590806 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 05:02:40.590819 | orchestrator |
2026-02-20 05:02:40.590831 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:02:40.590844 | orchestrator | Friday 20 February 2026 05:01:55 +0000 (0:00:01.893) 0:06:03.103 *******
2026-02-20 05:02:40.590857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:02:40.590870 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 05:02:40.590884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 05:02:40.590897 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:02:40.590910 | orchestrator |
2026-02-20 05:02:40.590922 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:02:40.590936 | orchestrator | Friday 20 February 2026 05:01:56 +0000 (0:00:01.146) 0:06:04.249 *******
2026-02-20 05:02:40.590963 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:02:40.590977 | orchestrator |
2026-02-20 05:02:40.590989 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:02:40.591002 | orchestrator | Friday 20 February 2026 05:01:57 +0000 (0:00:01.108) 0:06:05.357 *******
2026-02-20 05:02:40.591016 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:02:40.591038 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:02:40.591051 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:02:40.591062 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:02:40.591073 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:02:40.591084 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:02:40.591095 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:02:40.591105 | orchestrator |
2026-02-20 05:02:40.591116 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:02:40.591127 | orchestrator | Friday 20 February 2026 05:01:59 +0000 (0:00:02.054) 0:06:07.411 *******
2026-02-20 05:02:40.591138 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:02:40.591149 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:02:40.591160 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:02:40.591171 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:02:40.591199 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:02:40.591211 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:02:40.591222 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:02:40.591233 | orchestrator |
2026-02-20 05:02:40.591244 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-20 05:02:40.591255 | orchestrator | Friday 20 February 2026 05:02:02 +0000 (0:00:02.872) 0:06:10.284
******* 2026-02-20 05:02:40.591265 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-20 05:02:40.591276 | orchestrator | 2026-02-20 05:02:40.591287 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-20 05:02:40.591299 | orchestrator | Friday 20 February 2026 05:02:05 +0000 (0:00:02.281) 0:06:12.565 ******* 2026-02-20 05:02:40.591310 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.591321 | orchestrator | 2026-02-20 05:02:40.591331 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-20 05:02:40.591342 | orchestrator | Friday 20 February 2026 05:02:06 +0000 (0:00:01.210) 0:06:13.776 ******* 2026-02-20 05:02:40.591353 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.591364 | orchestrator | 2026-02-20 05:02:40.591375 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-20 05:02:40.591386 | orchestrator | Friday 20 February 2026 05:02:07 +0000 (0:00:01.170) 0:06:14.946 ******* 2026-02-20 05:02:40.591397 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-20 05:02:40.591408 | orchestrator | 2026-02-20 05:02:40.591419 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-20 05:02:40.591429 | orchestrator | Friday 20 February 2026 05:02:09 +0000 (0:00:02.303) 0:06:17.250 ******* 2026-02-20 05:02:40.591440 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.591451 | orchestrator | 2026-02-20 05:02:40.591470 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-20 05:02:40.591489 | orchestrator | Friday 20 February 2026 05:02:10 +0000 (0:00:01.118) 0:06:18.368 ******* 2026-02-20 05:02:40.591508 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:02:40.591528 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:02:40.591547 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:02:40.591567 | orchestrator | 2026-02-20 05:02:40.591586 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-20 05:02:40.591615 | orchestrator | Friday 20 February 2026 05:02:13 +0000 (0:00:02.461) 0:06:20.829 ******* 2026-02-20 05:02:40.591627 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-20 05:02:40.591638 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-20 05:02:40.591650 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-20 05:02:40.591661 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-20 05:02:40.591672 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-20 05:02:40.591684 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-20 05:02:40.591695 | orchestrator | 2026-02-20 05:02:40.591706 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-20 05:02:40.591743 | orchestrator | Friday 20 February 2026 05:02:27 +0000 (0:00:14.025) 0:06:34.855 ******* 2026-02-20 05:02:40.591755 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:02:40.591766 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:02:40.591777 | orchestrator | 2026-02-20 05:02:40.591795 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-20 05:02:40.591806 | orchestrator | Friday 20 
February 2026 05:02:31 +0000 (0:00:03.868) 0:06:38.723 ******* 2026-02-20 05:02:40.591817 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:02:40.591829 | orchestrator | 2026-02-20 05:02:40.591840 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:02:40.591851 | orchestrator | Friday 20 February 2026 05:02:33 +0000 (0:00:02.544) 0:06:41.267 ******* 2026-02-20 05:02:40.591862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-20 05:02:40.591873 | orchestrator | 2026-02-20 05:02:40.591884 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:02:40.591895 | orchestrator | Friday 20 February 2026 05:02:35 +0000 (0:00:01.501) 0:06:42.769 ******* 2026-02-20 05:02:40.591905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-20 05:02:40.591917 | orchestrator | 2026-02-20 05:02:40.591928 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:02:40.591939 | orchestrator | Friday 20 February 2026 05:02:36 +0000 (0:00:01.485) 0:06:44.255 ******* 2026-02-20 05:02:40.591950 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:02:40.591961 | orchestrator | 2026-02-20 05:02:40.591971 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:02:40.591982 | orchestrator | Friday 20 February 2026 05:02:38 +0000 (0:00:01.548) 0:06:45.803 ******* 2026-02-20 05:02:40.591994 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.592005 | orchestrator | 2026-02-20 05:02:40.592016 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:02:40.592027 | orchestrator | Friday 20 February 2026 05:02:39 +0000 (0:00:01.113) 0:06:46.917 ******* 2026-02-20 05:02:40.592038 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 05:02:40.592049 | orchestrator | 2026-02-20 05:02:40.592069 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:03:31.794914 | orchestrator | Friday 20 February 2026 05:02:40 +0000 (0:00:01.144) 0:06:48.062 ******* 2026-02-20 05:03:31.795023 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795036 | orchestrator | 2026-02-20 05:03:31.795044 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:03:31.795052 | orchestrator | Friday 20 February 2026 05:02:41 +0000 (0:00:01.088) 0:06:49.150 ******* 2026-02-20 05:03:31.795059 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795067 | orchestrator | 2026-02-20 05:03:31.795074 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:03:31.795100 | orchestrator | Friday 20 February 2026 05:02:43 +0000 (0:00:01.506) 0:06:50.657 ******* 2026-02-20 05:03:31.795107 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795114 | orchestrator | 2026-02-20 05:03:31.795121 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:03:31.795128 | orchestrator | Friday 20 February 2026 05:02:44 +0000 (0:00:01.101) 0:06:51.758 ******* 2026-02-20 05:03:31.795135 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795142 | orchestrator | 2026-02-20 05:03:31.795148 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:03:31.795155 | orchestrator | Friday 20 February 2026 05:02:45 +0000 (0:00:01.130) 0:06:52.888 ******* 2026-02-20 05:03:31.795162 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795169 | orchestrator | 2026-02-20 05:03:31.795176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 
05:03:31.795183 | orchestrator | Friday 20 February 2026 05:02:46 +0000 (0:00:01.541) 0:06:54.430 ******* 2026-02-20 05:03:31.795189 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795196 | orchestrator | 2026-02-20 05:03:31.795203 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:03:31.795211 | orchestrator | Friday 20 February 2026 05:02:48 +0000 (0:00:01.596) 0:06:56.026 ******* 2026-02-20 05:03:31.795218 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795225 | orchestrator | 2026-02-20 05:03:31.795231 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:03:31.795238 | orchestrator | Friday 20 February 2026 05:02:49 +0000 (0:00:01.127) 0:06:57.154 ******* 2026-02-20 05:03:31.795245 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795252 | orchestrator | 2026-02-20 05:03:31.795259 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:03:31.795265 | orchestrator | Friday 20 February 2026 05:02:50 +0000 (0:00:01.167) 0:06:58.321 ******* 2026-02-20 05:03:31.795272 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795279 | orchestrator | 2026-02-20 05:03:31.795286 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:03:31.795293 | orchestrator | Friday 20 February 2026 05:02:51 +0000 (0:00:01.105) 0:06:59.427 ******* 2026-02-20 05:03:31.795300 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795307 | orchestrator | 2026-02-20 05:03:31.795313 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:03:31.795320 | orchestrator | Friday 20 February 2026 05:02:53 +0000 (0:00:01.122) 0:07:00.550 ******* 2026-02-20 05:03:31.795327 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795334 | orchestrator | 
2026-02-20 05:03:31.795343 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:03:31.795355 | orchestrator | Friday 20 February 2026 05:02:54 +0000 (0:00:01.108) 0:07:01.658 ******* 2026-02-20 05:03:31.795365 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795376 | orchestrator | 2026-02-20 05:03:31.795387 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:03:31.795398 | orchestrator | Friday 20 February 2026 05:02:55 +0000 (0:00:01.120) 0:07:02.779 ******* 2026-02-20 05:03:31.795408 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795417 | orchestrator | 2026-02-20 05:03:31.795426 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:03:31.795436 | orchestrator | Friday 20 February 2026 05:02:56 +0000 (0:00:01.096) 0:07:03.876 ******* 2026-02-20 05:03:31.795446 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795456 | orchestrator | 2026-02-20 05:03:31.795481 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:03:31.795496 | orchestrator | Friday 20 February 2026 05:02:57 +0000 (0:00:01.144) 0:07:05.021 ******* 2026-02-20 05:03:31.795508 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795520 | orchestrator | 2026-02-20 05:03:31.795533 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:03:31.795556 | orchestrator | Friday 20 February 2026 05:02:58 +0000 (0:00:01.142) 0:07:06.163 ******* 2026-02-20 05:03:31.795568 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.795579 | orchestrator | 2026-02-20 05:03:31.795590 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:03:31.795601 | orchestrator | Friday 20 February 2026 05:02:59 +0000 (0:00:01.176) 
0:07:07.339 ******* 2026-02-20 05:03:31.795612 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795623 | orchestrator | 2026-02-20 05:03:31.795635 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:03:31.795647 | orchestrator | Friday 20 February 2026 05:03:00 +0000 (0:00:01.112) 0:07:08.452 ******* 2026-02-20 05:03:31.795660 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795672 | orchestrator | 2026-02-20 05:03:31.795683 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:03:31.795692 | orchestrator | Friday 20 February 2026 05:03:02 +0000 (0:00:01.173) 0:07:09.626 ******* 2026-02-20 05:03:31.795700 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795710 | orchestrator | 2026-02-20 05:03:31.795722 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:03:31.795733 | orchestrator | Friday 20 February 2026 05:03:03 +0000 (0:00:01.143) 0:07:10.770 ******* 2026-02-20 05:03:31.795770 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795782 | orchestrator | 2026-02-20 05:03:31.795793 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:03:31.795805 | orchestrator | Friday 20 February 2026 05:03:04 +0000 (0:00:01.129) 0:07:11.899 ******* 2026-02-20 05:03:31.795837 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795849 | orchestrator | 2026-02-20 05:03:31.795860 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:03:31.795871 | orchestrator | Friday 20 February 2026 05:03:05 +0000 (0:00:01.114) 0:07:13.013 ******* 2026-02-20 05:03:31.795883 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795894 | orchestrator | 2026-02-20 05:03:31.795905 | orchestrator | TASK [ceph-common : Set_fact 
ceph_version] ************************************* 2026-02-20 05:03:31.795916 | orchestrator | Friday 20 February 2026 05:03:06 +0000 (0:00:01.102) 0:07:14.116 ******* 2026-02-20 05:03:31.795926 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795936 | orchestrator | 2026-02-20 05:03:31.795948 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:03:31.795960 | orchestrator | Friday 20 February 2026 05:03:07 +0000 (0:00:01.121) 0:07:15.238 ******* 2026-02-20 05:03:31.795972 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.795983 | orchestrator | 2026-02-20 05:03:31.795994 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:03:31.796005 | orchestrator | Friday 20 February 2026 05:03:08 +0000 (0:00:01.133) 0:07:16.372 ******* 2026-02-20 05:03:31.796017 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796027 | orchestrator | 2026-02-20 05:03:31.796038 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:03:31.796048 | orchestrator | Friday 20 February 2026 05:03:09 +0000 (0:00:01.092) 0:07:17.464 ******* 2026-02-20 05:03:31.796059 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796070 | orchestrator | 2026-02-20 05:03:31.796082 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:03:31.796094 | orchestrator | Friday 20 February 2026 05:03:11 +0000 (0:00:01.149) 0:07:18.614 ******* 2026-02-20 05:03:31.796104 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796115 | orchestrator | 2026-02-20 05:03:31.796125 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:03:31.796136 | orchestrator | Friday 20 February 2026 05:03:12 +0000 (0:00:01.143) 0:07:19.757 ******* 2026-02-20 05:03:31.796147 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796157 | orchestrator | 2026-02-20 05:03:31.796176 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:03:31.796186 | orchestrator | Friday 20 February 2026 05:03:13 +0000 (0:00:01.125) 0:07:20.883 ******* 2026-02-20 05:03:31.796197 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.796207 | orchestrator | 2026-02-20 05:03:31.796217 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:03:31.796228 | orchestrator | Friday 20 February 2026 05:03:15 +0000 (0:00:01.997) 0:07:22.881 ******* 2026-02-20 05:03:31.796238 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.796248 | orchestrator | 2026-02-20 05:03:31.796259 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:03:31.796269 | orchestrator | Friday 20 February 2026 05:03:17 +0000 (0:00:02.423) 0:07:25.304 ******* 2026-02-20 05:03:31.796279 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-20 05:03:31.796291 | orchestrator | 2026-02-20 05:03:31.796301 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:03:31.796312 | orchestrator | Friday 20 February 2026 05:03:19 +0000 (0:00:01.455) 0:07:26.759 ******* 2026-02-20 05:03:31.796322 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796333 | orchestrator | 2026-02-20 05:03:31.796343 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:03:31.796354 | orchestrator | Friday 20 February 2026 05:03:20 +0000 (0:00:01.126) 0:07:27.885 ******* 2026-02-20 05:03:31.796364 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796374 | orchestrator | 2026-02-20 05:03:31.796385 | orchestrator | TASK [ceph-container-common : 
Remove ceph udev rules] ************************** 2026-02-20 05:03:31.796395 | orchestrator | Friday 20 February 2026 05:03:21 +0000 (0:00:01.101) 0:07:28.987 ******* 2026-02-20 05:03:31.796413 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:03:31.796423 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:03:31.796433 | orchestrator | 2026-02-20 05:03:31.796444 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:03:31.796454 | orchestrator | Friday 20 February 2026 05:03:23 +0000 (0:00:01.878) 0:07:30.866 ******* 2026-02-20 05:03:31.796464 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.796475 | orchestrator | 2026-02-20 05:03:31.796485 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:03:31.796495 | orchestrator | Friday 20 February 2026 05:03:25 +0000 (0:00:01.670) 0:07:32.536 ******* 2026-02-20 05:03:31.796506 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796516 | orchestrator | 2026-02-20 05:03:31.796527 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:03:31.796537 | orchestrator | Friday 20 February 2026 05:03:26 +0000 (0:00:01.155) 0:07:33.692 ******* 2026-02-20 05:03:31.796547 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796558 | orchestrator | 2026-02-20 05:03:31.796568 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:03:31.796578 | orchestrator | Friday 20 February 2026 05:03:27 +0000 (0:00:01.120) 0:07:34.812 ******* 2026-02-20 05:03:31.796588 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:03:31.796599 | orchestrator | 2026-02-20 05:03:31.796643 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] 
************************* 2026-02-20 05:03:31.796654 | orchestrator | Friday 20 February 2026 05:03:28 +0000 (0:00:01.121) 0:07:35.933 ******* 2026-02-20 05:03:31.796664 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-20 05:03:31.796675 | orchestrator | 2026-02-20 05:03:31.796686 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:03:31.796696 | orchestrator | Friday 20 February 2026 05:03:29 +0000 (0:00:01.450) 0:07:37.384 ******* 2026-02-20 05:03:31.796706 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:03:31.796717 | orchestrator | 2026-02-20 05:03:31.796735 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:04:18.550557 | orchestrator | Friday 20 February 2026 05:03:31 +0000 (0:00:01.879) 0:07:39.264 ******* 2026-02-20 05:04:18.550659 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:04:18.550672 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:04:18.550680 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:04:18.550688 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550697 | orchestrator | 2026-02-20 05:04:18.550705 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:04:18.550713 | orchestrator | Friday 20 February 2026 05:03:32 +0000 (0:00:01.139) 0:07:40.403 ******* 2026-02-20 05:04:18.550720 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550727 | orchestrator | 2026-02-20 05:04:18.550735 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:04:18.550742 | orchestrator | Friday 20 February 2026 05:03:34 +0000 (0:00:01.117) 0:07:41.520 ******* 2026-02-20 
05:04:18.550750 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550757 | orchestrator | 2026-02-20 05:04:18.550764 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:04:18.550772 | orchestrator | Friday 20 February 2026 05:03:35 +0000 (0:00:01.130) 0:07:42.650 ******* 2026-02-20 05:04:18.550779 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550786 | orchestrator | 2026-02-20 05:04:18.550794 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:04:18.550801 | orchestrator | Friday 20 February 2026 05:03:36 +0000 (0:00:01.108) 0:07:43.759 ******* 2026-02-20 05:04:18.550808 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550815 | orchestrator | 2026-02-20 05:04:18.550823 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:04:18.550833 | orchestrator | Friday 20 February 2026 05:03:37 +0000 (0:00:01.120) 0:07:44.879 ******* 2026-02-20 05:04:18.550937 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.550948 | orchestrator | 2026-02-20 05:04:18.550959 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:04:18.550970 | orchestrator | Friday 20 February 2026 05:03:38 +0000 (0:00:01.171) 0:07:46.051 ******* 2026-02-20 05:04:18.550982 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:04:18.550994 | orchestrator | 2026-02-20 05:04:18.551005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:04:18.551017 | orchestrator | Friday 20 February 2026 05:03:41 +0000 (0:00:02.821) 0:07:48.873 ******* 2026-02-20 05:04:18.551029 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:04:18.551041 | orchestrator | 2026-02-20 05:04:18.551053 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-20 05:04:18.551065 | orchestrator | Friday 20 February 2026 05:03:42 +0000 (0:00:01.105) 0:07:49.979 ******* 2026-02-20 05:04:18.551075 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-20 05:04:18.551085 | orchestrator | 2026-02-20 05:04:18.551097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:04:18.551109 | orchestrator | Friday 20 February 2026 05:03:43 +0000 (0:00:01.430) 0:07:51.410 ******* 2026-02-20 05:04:18.551122 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551135 | orchestrator | 2026-02-20 05:04:18.551149 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:04:18.551161 | orchestrator | Friday 20 February 2026 05:03:45 +0000 (0:00:01.136) 0:07:52.547 ******* 2026-02-20 05:04:18.551175 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551188 | orchestrator | 2026-02-20 05:04:18.551202 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:04:18.551215 | orchestrator | Friday 20 February 2026 05:03:46 +0000 (0:00:01.163) 0:07:53.710 ******* 2026-02-20 05:04:18.551227 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551266 | orchestrator | 2026-02-20 05:04:18.551295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:04:18.551308 | orchestrator | Friday 20 February 2026 05:03:47 +0000 (0:00:01.119) 0:07:54.830 ******* 2026-02-20 05:04:18.551319 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551331 | orchestrator | 2026-02-20 05:04:18.551342 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:04:18.551354 | orchestrator | Friday 20 February 2026 05:03:48 +0000 (0:00:01.115) 0:07:55.945 ******* 2026-02-20 
05:04:18.551366 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551378 | orchestrator | 2026-02-20 05:04:18.551389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:04:18.551400 | orchestrator | Friday 20 February 2026 05:03:49 +0000 (0:00:01.120) 0:07:57.066 ******* 2026-02-20 05:04:18.551409 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551421 | orchestrator | 2026-02-20 05:04:18.551432 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:04:18.551443 | orchestrator | Friday 20 February 2026 05:03:50 +0000 (0:00:01.144) 0:07:58.211 ******* 2026-02-20 05:04:18.551455 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551466 | orchestrator | 2026-02-20 05:04:18.551479 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:04:18.551490 | orchestrator | Friday 20 February 2026 05:03:51 +0000 (0:00:01.151) 0:07:59.362 ******* 2026-02-20 05:04:18.551502 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551514 | orchestrator | 2026-02-20 05:04:18.551525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:04:18.551536 | orchestrator | Friday 20 February 2026 05:03:53 +0000 (0:00:01.137) 0:08:00.500 ******* 2026-02-20 05:04:18.551548 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:04:18.551560 | orchestrator | 2026-02-20 05:04:18.551572 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:04:18.551586 | orchestrator | Friday 20 February 2026 05:03:54 +0000 (0:00:01.131) 0:08:01.631 ******* 2026-02-20 05:04:18.551599 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-20 05:04:18.551612 | orchestrator | 2026-02-20 05:04:18.551645 | orchestrator | TASK 
[ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:04:18.551659 | orchestrator | Friday 20 February 2026 05:03:55 +0000 (0:00:01.488) 0:08:03.120 ******* 2026-02-20 05:04:18.551670 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-20 05:04:18.551683 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-20 05:04:18.551695 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-20 05:04:18.551706 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-20 05:04:18.551718 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-20 05:04:18.551729 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-20 05:04:18.551741 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-20 05:04:18.551752 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:04:18.551763 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:04:18.551774 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:04:18.551786 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:04:18.551797 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:04:18.551808 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:04:18.551819 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:04:18.551831 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-20 05:04:18.551864 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-20 05:04:18.551876 | orchestrator | 2026-02-20 05:04:18.551887 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:04:18.551908 | orchestrator | Friday 20 February 2026 05:04:02 +0000 
(0:00:07.107) 0:08:10.227 ******* 2026-02-20 05:04:18.551919 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551930 | orchestrator | 2026-02-20 05:04:18.551942 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:04:18.551953 | orchestrator | Friday 20 February 2026 05:04:03 +0000 (0:00:01.128) 0:08:11.356 ******* 2026-02-20 05:04:18.551965 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.551976 | orchestrator | 2026-02-20 05:04:18.551987 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:04:18.551999 | orchestrator | Friday 20 February 2026 05:04:04 +0000 (0:00:01.108) 0:08:12.465 ******* 2026-02-20 05:04:18.552010 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552022 | orchestrator | 2026-02-20 05:04:18.552034 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:04:18.552045 | orchestrator | Friday 20 February 2026 05:04:06 +0000 (0:00:01.123) 0:08:13.588 ******* 2026-02-20 05:04:18.552056 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552067 | orchestrator | 2026-02-20 05:04:18.552079 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:04:18.552090 | orchestrator | Friday 20 February 2026 05:04:07 +0000 (0:00:01.111) 0:08:14.700 ******* 2026-02-20 05:04:18.552101 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552113 | orchestrator | 2026-02-20 05:04:18.552124 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:04:18.552135 | orchestrator | Friday 20 February 2026 05:04:08 +0000 (0:00:01.121) 0:08:15.822 ******* 2026-02-20 05:04:18.552147 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552158 | orchestrator | 2026-02-20 05:04:18.552169 | orchestrator | TASK [ceph-config : 
Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:04:18.552181 | orchestrator | Friday 20 February 2026 05:04:09 +0000 (0:00:01.110) 0:08:16.932 ******* 2026-02-20 05:04:18.552191 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552203 | orchestrator | 2026-02-20 05:04:18.552221 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:04:18.552232 | orchestrator | Friday 20 February 2026 05:04:10 +0000 (0:00:01.144) 0:08:18.077 ******* 2026-02-20 05:04:18.552244 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552255 | orchestrator | 2026-02-20 05:04:18.552266 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:04:18.552278 | orchestrator | Friday 20 February 2026 05:04:11 +0000 (0:00:01.097) 0:08:19.174 ******* 2026-02-20 05:04:18.552289 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552300 | orchestrator | 2026-02-20 05:04:18.552311 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:04:18.552322 | orchestrator | Friday 20 February 2026 05:04:12 +0000 (0:00:01.102) 0:08:20.277 ******* 2026-02-20 05:04:18.552333 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552345 | orchestrator | 2026-02-20 05:04:18.552355 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:04:18.552368 | orchestrator | Friday 20 February 2026 05:04:13 +0000 (0:00:01.119) 0:08:21.397 ******* 2026-02-20 05:04:18.552380 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552393 | orchestrator | 2026-02-20 05:04:18.552404 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:04:18.552416 | orchestrator | Friday 20 
February 2026 05:04:15 +0000 (0:00:01.144) 0:08:22.541 ******* 2026-02-20 05:04:18.552427 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552438 | orchestrator | 2026-02-20 05:04:18.552449 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:04:18.552460 | orchestrator | Friday 20 February 2026 05:04:16 +0000 (0:00:01.129) 0:08:23.671 ******* 2026-02-20 05:04:18.552479 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552491 | orchestrator | 2026-02-20 05:04:18.552501 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:04:18.552513 | orchestrator | Friday 20 February 2026 05:04:17 +0000 (0:00:01.223) 0:08:24.894 ******* 2026-02-20 05:04:18.552524 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:04:18.552535 | orchestrator | 2026-02-20 05:04:18.552554 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:05:13.264261 | orchestrator | Friday 20 February 2026 05:04:18 +0000 (0:00:01.125) 0:08:26.020 ******* 2026-02-20 05:05:13.264367 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264380 | orchestrator | 2026-02-20 05:05:13.264389 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:05:13.264398 | orchestrator | Friday 20 February 2026 05:04:19 +0000 (0:00:01.190) 0:08:27.210 ******* 2026-02-20 05:05:13.264407 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264415 | orchestrator | 2026-02-20 05:05:13.264424 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:05:13.264432 | orchestrator | Friday 20 February 2026 05:04:20 +0000 (0:00:01.120) 0:08:28.331 ******* 2026-02-20 05:05:13.264440 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264448 | orchestrator | 2026-02-20 05:05:13.264457 | 
orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:05:13.264467 | orchestrator | Friday 20 February 2026 05:04:21 +0000 (0:00:01.129) 0:08:29.461 ******* 2026-02-20 05:05:13.264476 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264490 | orchestrator | 2026-02-20 05:05:13.264503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:05:13.264517 | orchestrator | Friday 20 February 2026 05:04:23 +0000 (0:00:01.122) 0:08:30.584 ******* 2026-02-20 05:05:13.264530 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264543 | orchestrator | 2026-02-20 05:05:13.264556 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:05:13.264569 | orchestrator | Friday 20 February 2026 05:04:24 +0000 (0:00:01.148) 0:08:31.732 ******* 2026-02-20 05:05:13.264582 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264595 | orchestrator | 2026-02-20 05:05:13.264609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:05:13.264622 | orchestrator | Friday 20 February 2026 05:04:25 +0000 (0:00:01.127) 0:08:32.859 ******* 2026-02-20 05:05:13.264635 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264648 | orchestrator | 2026-02-20 05:05:13.264662 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:05:13.264674 | orchestrator | Friday 20 February 2026 05:04:26 +0000 (0:00:01.160) 0:08:34.020 ******* 2026-02-20 05:05:13.264688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:05:13.264702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:05:13.264716 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 
05:05:13.264730 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264743 | orchestrator | 2026-02-20 05:05:13.264757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:05:13.264771 | orchestrator | Friday 20 February 2026 05:04:28 +0000 (0:00:01.695) 0:08:35.715 ******* 2026-02-20 05:05:13.264784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:05:13.264799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:05:13.264815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:05:13.264830 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.264845 | orchestrator | 2026-02-20 05:05:13.264861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:05:13.264879 | orchestrator | Friday 20 February 2026 05:04:29 +0000 (0:00:01.374) 0:08:37.089 ******* 2026-02-20 05:05:13.264920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:05:13.264936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:05:13.265010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:05:13.265026 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.265041 | orchestrator | 2026-02-20 05:05:13.265147 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:05:13.265168 | orchestrator | Friday 20 February 2026 05:04:31 +0000 (0:00:01.517) 0:08:38.607 ******* 2026-02-20 05:05:13.265183 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.265197 | orchestrator | 2026-02-20 05:05:13.265212 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:05:13.265226 | orchestrator | Friday 20 February 2026 05:04:32 +0000 (0:00:01.124) 0:08:39.732 ******* 
2026-02-20 05:05:13.265240 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-20 05:05:13.265254 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.265270 | orchestrator | 2026-02-20 05:05:13.265285 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:05:13.265318 | orchestrator | Friday 20 February 2026 05:04:33 +0000 (0:00:01.391) 0:08:41.123 ******* 2026-02-20 05:05:13.265333 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265348 | orchestrator | 2026-02-20 05:05:13.265363 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:05:13.265378 | orchestrator | Friday 20 February 2026 05:04:35 +0000 (0:00:01.757) 0:08:42.881 ******* 2026-02-20 05:05:13.265392 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265406 | orchestrator | 2026-02-20 05:05:13.265419 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-20 05:05:13.265433 | orchestrator | Friday 20 February 2026 05:04:36 +0000 (0:00:01.135) 0:08:44.016 ******* 2026-02-20 05:05:13.265446 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-20 05:05:13.265461 | orchestrator | 2026-02-20 05:05:13.265473 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-20 05:05:13.265487 | orchestrator | Friday 20 February 2026 05:04:38 +0000 (0:00:01.472) 0:08:45.488 ******* 2026-02-20 05:05:13.265500 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:05:13.265513 | orchestrator | 2026-02-20 05:05:13.265526 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-20 05:05:13.265539 | orchestrator | Friday 20 February 2026 05:04:41 +0000 (0:00:03.700) 0:08:49.189 ******* 2026-02-20 05:05:13.265552 | orchestrator | skipping: 
[testbed-node-0] 2026-02-20 05:05:13.265563 | orchestrator | 2026-02-20 05:05:13.265598 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-20 05:05:13.265612 | orchestrator | Friday 20 February 2026 05:04:42 +0000 (0:00:01.144) 0:08:50.334 ******* 2026-02-20 05:05:13.265624 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265636 | orchestrator | 2026-02-20 05:05:13.265649 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-20 05:05:13.265660 | orchestrator | Friday 20 February 2026 05:04:43 +0000 (0:00:01.136) 0:08:51.470 ******* 2026-02-20 05:05:13.265672 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265684 | orchestrator | 2026-02-20 05:05:13.265697 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-20 05:05:13.265709 | orchestrator | Friday 20 February 2026 05:04:45 +0000 (0:00:01.165) 0:08:52.636 ******* 2026-02-20 05:05:13.265721 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:05:13.265733 | orchestrator | 2026-02-20 05:05:13.265745 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-20 05:05:13.265757 | orchestrator | Friday 20 February 2026 05:04:47 +0000 (0:00:02.110) 0:08:54.747 ******* 2026-02-20 05:05:13.265770 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265782 | orchestrator | 2026-02-20 05:05:13.265794 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-20 05:05:13.265822 | orchestrator | Friday 20 February 2026 05:04:48 +0000 (0:00:01.606) 0:08:56.353 ******* 2026-02-20 05:05:13.265833 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265846 | orchestrator | 2026-02-20 05:05:13.265858 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-20 05:05:13.265871 | orchestrator | Friday 20 
February 2026 05:04:50 +0000 (0:00:01.523) 0:08:57.877 ******* 2026-02-20 05:05:13.265885 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265898 | orchestrator | 2026-02-20 05:05:13.265910 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-20 05:05:13.265923 | orchestrator | Friday 20 February 2026 05:04:51 +0000 (0:00:01.464) 0:08:59.342 ******* 2026-02-20 05:05:13.265936 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.265972 | orchestrator | 2026-02-20 05:05:13.265987 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-20 05:05:13.266001 | orchestrator | Friday 20 February 2026 05:04:53 +0000 (0:00:01.661) 0:09:01.003 ******* 2026-02-20 05:05:13.266078 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.266100 | orchestrator | 2026-02-20 05:05:13.266113 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-20 05:05:13.266128 | orchestrator | Friday 20 February 2026 05:04:55 +0000 (0:00:01.710) 0:09:02.714 ******* 2026-02-20 05:05:13.266141 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 05:05:13.266154 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-20 05:05:13.266165 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 05:05:13.266173 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-20 05:05:13.266181 | orchestrator | 2026-02-20 05:05:13.266189 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-20 05:05:13.266197 | orchestrator | Friday 20 February 2026 05:04:59 +0000 (0:00:03.919) 0:09:06.634 ******* 2026-02-20 05:05:13.266205 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:05:13.266213 | orchestrator | 2026-02-20 05:05:13.266220 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-20 05:05:13.266228 | orchestrator | Friday 20 February 2026 05:05:01 +0000 (0:00:02.015) 0:09:08.649 ******* 2026-02-20 05:05:13.266236 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.266244 | orchestrator | 2026-02-20 05:05:13.266252 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-20 05:05:13.266260 | orchestrator | Friday 20 February 2026 05:05:02 +0000 (0:00:01.133) 0:09:09.782 ******* 2026-02-20 05:05:13.266267 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.266275 | orchestrator | 2026-02-20 05:05:13.266292 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-20 05:05:13.266301 | orchestrator | Friday 20 February 2026 05:05:03 +0000 (0:00:01.177) 0:09:10.960 ******* 2026-02-20 05:05:13.266309 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.266317 | orchestrator | 2026-02-20 05:05:13.266325 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-20 05:05:13.266332 | orchestrator | Friday 20 February 2026 05:05:05 +0000 (0:00:02.114) 0:09:13.075 ******* 2026-02-20 05:05:13.266340 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:05:13.266348 | orchestrator | 2026-02-20 05:05:13.266356 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-20 05:05:13.266364 | orchestrator | Friday 20 February 2026 05:05:07 +0000 (0:00:01.479) 0:09:14.555 ******* 2026-02-20 05:05:13.266372 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.266380 | orchestrator | 2026-02-20 05:05:13.266387 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-20 05:05:13.266395 | orchestrator | Friday 20 February 2026 05:05:08 +0000 (0:00:01.108) 0:09:15.663 ******* 2026-02-20 05:05:13.266403 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-20 05:05:13.266411 | orchestrator | 2026-02-20 05:05:13.266419 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-20 05:05:13.266436 | orchestrator | Friday 20 February 2026 05:05:09 +0000 (0:00:01.420) 0:09:17.084 ******* 2026-02-20 05:05:13.266444 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.266452 | orchestrator | 2026-02-20 05:05:13.266459 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-20 05:05:13.266467 | orchestrator | Friday 20 February 2026 05:05:10 +0000 (0:00:01.084) 0:09:18.168 ******* 2026-02-20 05:05:13.266475 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:05:13.266483 | orchestrator | 2026-02-20 05:05:13.266491 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-20 05:05:13.266499 | orchestrator | Friday 20 February 2026 05:05:11 +0000 (0:00:01.105) 0:09:19.274 ******* 2026-02-20 05:05:13.266507 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-20 05:05:13.266515 | orchestrator | 2026-02-20 05:05:13.266534 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-20 05:06:05.501464 | orchestrator | Friday 20 February 2026 05:05:13 +0000 (0:00:01.460) 0:09:20.735 ******* 2026-02-20 05:06:05.501573 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.501587 | orchestrator | 2026-02-20 05:06:05.501598 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-20 05:06:05.501608 | orchestrator | Friday 20 February 2026 05:05:15 +0000 (0:00:02.286) 0:09:23.022 ******* 2026-02-20 05:06:05.501617 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.501627 | orchestrator | 2026-02-20 05:06:05.501636 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-20 05:06:05.501645 | orchestrator | Friday 20 February 2026 05:05:17 +0000 (0:00:01.998) 0:09:25.020 ******* 2026-02-20 05:06:05.501654 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.501663 | orchestrator | 2026-02-20 05:06:05.501673 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-20 05:06:05.501682 | orchestrator | Friday 20 February 2026 05:05:19 +0000 (0:00:02.442) 0:09:27.463 ******* 2026-02-20 05:06:05.501691 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:06:05.501701 | orchestrator | 2026-02-20 05:06:05.501710 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-20 05:06:05.501719 | orchestrator | Friday 20 February 2026 05:05:23 +0000 (0:00:03.339) 0:09:30.803 ******* 2026-02-20 05:06:05.501728 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-20 05:06:05.501738 | orchestrator | 2026-02-20 05:06:05.501747 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-20 05:06:05.501755 | orchestrator | Friday 20 February 2026 05:05:24 +0000 (0:00:01.535) 0:09:32.338 ******* 2026-02-20 05:06:05.501765 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.501774 | orchestrator | 2026-02-20 05:06:05.501783 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-20 05:06:05.501792 | orchestrator | Friday 20 February 2026 05:05:27 +0000 (0:00:02.268) 0:09:34.607 ******* 2026-02-20 05:06:05.501801 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.501810 | orchestrator | 2026-02-20 05:06:05.501819 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-20 05:06:05.501828 | orchestrator | Friday 20 February 2026 05:05:30 +0000 (0:00:03.308) 0:09:37.915 ******* 2026-02-20 05:06:05.501837 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:06:05.501846 | orchestrator | 2026-02-20 05:06:05.501855 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-20 05:06:05.501941 | orchestrator | Friday 20 February 2026 05:05:31 +0000 (0:00:01.110) 0:09:39.026 ******* 2026-02-20 05:06:05.501955 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-20 05:06:05.501990 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-20 05:06:05.502120 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-20 05:06:05.502148 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-20 05:06:05.502163 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-20 05:06:05.502178 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}])  2026-02-20 05:06:05.502194 | orchestrator | 2026-02-20 05:06:05.502233 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-20 05:06:05.502247 | orchestrator | Friday 20 February 2026 05:05:41 +0000 (0:00:10.266) 0:09:49.293 ******* 
2026-02-20 05:06:05.502261 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:06:05.502277 | orchestrator | 2026-02-20 05:06:05.502292 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:06:05.502305 | orchestrator | Friday 20 February 2026 05:05:44 +0000 (0:00:02.510) 0:09:51.803 ******* 2026-02-20 05:06:05.502314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:06:05.502323 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-20 05:06:05.502332 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-20 05:06:05.502341 | orchestrator | 2026-02-20 05:06:05.502349 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:06:05.502358 | orchestrator | Friday 20 February 2026 05:05:46 +0000 (0:00:02.129) 0:09:53.933 ******* 2026-02-20 05:06:05.502367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 05:06:05.502376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:06:05.502384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:06:05.502393 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:06:05.502402 | orchestrator | 2026-02-20 05:06:05.502411 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-20 05:06:05.502420 | orchestrator | Friday 20 February 2026 05:05:47 +0000 (0:00:01.363) 0:09:55.297 ******* 2026-02-20 05:06:05.502429 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:06:05.502438 | orchestrator | 2026-02-20 05:06:05.502447 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-20 05:06:05.502456 | orchestrator | Friday 20 February 2026 05:05:48 +0000 (0:00:01.106) 0:09:56.403 ******* 2026-02-20 05:06:05.502465 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:06:05.502484 | orchestrator | 2026-02-20 05:06:05.502493 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-20 05:06:05.502502 | orchestrator | 2026-02-20 05:06:05.502511 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-20 05:06:05.502519 | orchestrator | Friday 20 February 2026 05:05:51 +0000 (0:00:02.131) 0:09:58.535 ******* 2026-02-20 05:06:05.502528 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:06:05.502537 | orchestrator | 2026-02-20 05:06:05.502545 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-20 05:06:05.502554 | orchestrator | Friday 20 February 2026 05:05:52 +0000 (0:00:01.215) 0:09:59.750 ******* 2026-02-20 05:06:05.502563 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:06:05.502572 | orchestrator | 2026-02-20 05:06:05.502581 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-20 05:06:05.502590 | orchestrator | Friday 20 February 2026 05:05:53 +0000 (0:00:00.811) 0:10:00.562 ******* 2026-02-20 05:06:05.502598 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:06:05.502607 | orchestrator | 2026-02-20 05:06:05.502616 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-20 05:06:05.502625 | orchestrator | Friday 20 February 2026 05:05:53 +0000 (0:00:00.760) 0:10:01.323 ******* 2026-02-20 05:06:05.502633 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:06:05.502642 | orchestrator | 2026-02-20 05:06:05.502651 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:06:05.502659 | orchestrator | Friday 20 February 
2026 05:05:54 +0000 (0:00:00.780) 0:10:02.103 *******
2026-02-20 05:06:05.502668 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-02-20 05:06:05.502677 | orchestrator |
2026-02-20 05:06:05.502686 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 05:06:05.502702 | orchestrator | Friday 20 February 2026 05:05:55 +0000 (0:00:01.137) 0:10:03.240 *******
2026-02-20 05:06:05.502711 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502720 | orchestrator |
2026-02-20 05:06:05.502729 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 05:06:05.502737 | orchestrator | Friday 20 February 2026 05:05:57 +0000 (0:00:01.464) 0:10:04.705 *******
2026-02-20 05:06:05.502746 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502755 | orchestrator |
2026-02-20 05:06:05.502763 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 05:06:05.502772 | orchestrator | Friday 20 February 2026 05:05:58 +0000 (0:00:01.145) 0:10:05.851 *******
2026-02-20 05:06:05.502781 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502790 | orchestrator |
2026-02-20 05:06:05.502799 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 05:06:05.502807 | orchestrator | Friday 20 February 2026 05:05:59 +0000 (0:00:01.466) 0:10:07.317 *******
2026-02-20 05:06:05.502816 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502825 | orchestrator |
2026-02-20 05:06:05.502834 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 05:06:05.502842 | orchestrator | Friday 20 February 2026 05:06:00 +0000 (0:00:01.123) 0:10:08.441 *******
2026-02-20 05:06:05.502851 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502860 | orchestrator |
2026-02-20 05:06:05.502869 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 05:06:05.502877 | orchestrator | Friday 20 February 2026 05:06:02 +0000 (0:00:01.141) 0:10:09.582 *******
2026-02-20 05:06:05.502886 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502895 | orchestrator |
2026-02-20 05:06:05.502904 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 05:06:05.502913 | orchestrator | Friday 20 February 2026 05:06:03 +0000 (0:00:01.148) 0:10:10.731 *******
2026-02-20 05:06:05.502921 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:05.502931 | orchestrator |
2026-02-20 05:06:05.502939 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 05:06:05.502954 | orchestrator | Friday 20 February 2026 05:06:04 +0000 (0:00:01.126) 0:10:11.858 *******
2026-02-20 05:06:05.502963 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:05.502972 | orchestrator |
2026-02-20 05:06:05.502980 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-20 05:06:05.502995 | orchestrator | Friday 20 February 2026 05:06:05 +0000 (0:00:01.111) 0:10:12.970 *******
2026-02-20 05:06:29.018642 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:06:29.018765 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:06:29.018782 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:06:29.018795 | orchestrator |
2026-02-20 05:06:29.018807 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-20 05:06:29.018821 | orchestrator | Friday 20 February 2026 05:06:07 +0000 (0:00:01.616) 0:10:14.586 *******
2026-02-20 05:06:29.018832 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:29.018861 | orchestrator |
2026-02-20 05:06:29.018882 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-20 05:06:29.018895 | orchestrator | Friday 20 February 2026 05:06:08 +0000 (0:00:01.220) 0:10:15.807 *******
2026-02-20 05:06:29.018907 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:06:29.018920 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:06:29.018931 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:06:29.018943 | orchestrator |
2026-02-20 05:06:29.018954 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-20 05:06:29.018965 | orchestrator | Friday 20 February 2026 05:06:11 +0000 (0:00:02.876) 0:10:18.683 *******
2026-02-20 05:06:29.018978 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 05:06:29.018990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:06:29.019001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 05:06:29.019014 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019025 | orchestrator |
2026-02-20 05:06:29.019037 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-20 05:06:29.019048 | orchestrator | Friday 20 February 2026 05:06:12 +0000 (0:00:01.402) 0:10:20.085 *******
2026-02-20 05:06:29.019062 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019121 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019132 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019144 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019156 | orchestrator |
2026-02-20 05:06:29.019167 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-20 05:06:29.019179 | orchestrator | Friday 20 February 2026 05:06:14 +0000 (0:00:01.637) 0:10:21.723 *******
2026-02-20 05:06:29.019211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019254 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019267 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019279 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019291 | orchestrator |
2026-02-20 05:06:29.019303 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-20 05:06:29.019314 | orchestrator | Friday 20 February 2026 05:06:15 +0000 (0:00:01.171) 0:10:22.895 *******
2026-02-20 05:06:29.019347 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:06:08.823440', 'end': '2026-02-20 05:06:08.872742', 'delta': '0:00:00.049302', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019363 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b179183cbe33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:06:09.376321', 'end': '2026-02-20 05:06:09.414471', 'delta': '0:00:00.038150', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b179183cbe33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019376 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '28a82f95a8fd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:06:09.991381', 'end': '2026-02-20 05:06:10.044546', 'delta': '0:00:00.053165', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28a82f95a8fd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 05:06:29.019388 | orchestrator |
2026-02-20 05:06:29.019400 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-20 05:06:29.019411 | orchestrator | Friday 20 February 2026 05:06:16 +0000 (0:00:01.180) 0:10:24.076 *******
2026-02-20 05:06:29.019422 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:29.019434 | orchestrator |
2026-02-20 05:06:29.019445 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-20 05:06:29.019456 | orchestrator | Friday 20 February 2026 05:06:17 +0000 (0:00:01.226) 0:10:25.303 *******
2026-02-20 05:06:29.019475 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019486 | orchestrator |
2026-02-20 05:06:29.019498 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-20 05:06:29.019514 | orchestrator | Friday 20 February 2026 05:06:19 +0000 (0:00:01.271) 0:10:26.574 *******
2026-02-20 05:06:29.019526 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:29.019536 | orchestrator |
2026-02-20 05:06:29.019543 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-20 05:06:29.019550 | orchestrator | Friday 20 February 2026 05:06:20 +0000 (0:00:01.143) 0:10:27.718 *******
2026-02-20 05:06:29.019556 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:06:29.019564 | orchestrator |
2026-02-20 05:06:29.019576 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:06:29.019587 | orchestrator | Friday 20 February 2026 05:06:22 +0000 (0:00:01.975) 0:10:29.694 *******
2026-02-20 05:06:29.019598 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:29.019609 | orchestrator |
2026-02-20 05:06:29.019620 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-20 05:06:29.019630 | orchestrator | Friday 20 February 2026 05:06:23 +0000 (0:00:01.125) 0:10:30.819 *******
2026-02-20 05:06:29.019640 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019650 | orchestrator |
2026-02-20 05:06:29.019661 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-20 05:06:29.019671 | orchestrator | Friday 20 February 2026 05:06:24 +0000 (0:00:01.096) 0:10:31.915 *******
2026-02-20 05:06:29.019681 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019691 | orchestrator |
2026-02-20 05:06:29.019702 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:06:29.019713 | orchestrator | Friday 20 February 2026 05:06:25 +0000 (0:00:01.220) 0:10:33.135 *******
2026-02-20 05:06:29.019723 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019733 | orchestrator |
2026-02-20 05:06:29.019744 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-20 05:06:29.019756 | orchestrator | Friday 20 February 2026 05:06:26 +0000 (0:00:01.113) 0:10:34.249 *******
2026-02-20 05:06:29.019769 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019780 | orchestrator |
2026-02-20 05:06:29.019792 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-20 05:06:29.019804 | orchestrator | Friday 20 February 2026 05:06:27 +0000 (0:00:01.108) 0:10:35.358 *******
2026-02-20 05:06:29.019811 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:29.019818 | orchestrator |
2026-02-20 05:06:29.019825 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-20 05:06:29.019840 | orchestrator | Friday 20 February 2026 05:06:29 +0000 (0:00:01.130) 0:10:36.489 *******
2026-02-20 05:06:36.013512 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:36.013614 | orchestrator |
2026-02-20 05:06:36.013627 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-20 05:06:36.013639 | orchestrator | Friday 20 February 2026 05:06:30 +0000 (0:00:01.119) 0:10:37.608 *******
2026-02-20 05:06:36.013648 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:36.013658 | orchestrator |
2026-02-20 05:06:36.013667 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-20 05:06:36.013676 | orchestrator | Friday 20 February 2026 05:06:31 +0000 (0:00:01.116) 0:10:38.725 *******
2026-02-20 05:06:36.013685 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:36.013694 | orchestrator |
2026-02-20 05:06:36.013703 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-20 05:06:36.013713 | orchestrator | Friday 20 February 2026 05:06:32 +0000 (0:00:01.113) 0:10:39.838 *******
2026-02-20 05:06:36.013722 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:36.013731 | orchestrator |
2026-02-20 05:06:36.013740 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-20 05:06:36.013749 | orchestrator | Friday 20 February 2026 05:06:33 +0000 (0:00:01.182) 0:10:41.021 *******
2026-02-20 05:06:36.013782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-20 05:06:36.013841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-20 05:06:36.013941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:06:36.013966 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:36.013975 | orchestrator |
2026-02-20 05:06:36.013985 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-20 05:06:36.013994 | orchestrator | Friday 20 February 2026 05:06:34 +0000 (0:00:01.281) 0:10:42.303 *******
2026-02-20 05:06:36.014004 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:36.014075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692499 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692626 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692641 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692664 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692672 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692696 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692714 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692790 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:06:44.692804 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:44.692813 | orchestrator |
2026-02-20 05:06:44.692821 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 05:06:44.692830 | orchestrator | Friday 20 February 2026 05:06:36 +0000 (0:00:01.188) 0:10:43.491 *******
2026-02-20 05:06:44.692838 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:44.692848 | orchestrator |
2026-02-20 05:06:44.692857 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:06:44.692862 | orchestrator | Friday 20 February 2026 05:06:37 +0000 (0:00:01.507) 0:10:44.998 *******
2026-02-20 05:06:44.692867 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:44.692871 | orchestrator |
2026-02-20 05:06:44.692876 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:06:44.692880 | orchestrator | Friday 20 February 2026 05:06:38 +0000 (0:00:01.162) 0:10:46.161 *******
2026-02-20 05:06:44.692885 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:06:44.692890 | orchestrator |
2026-02-20 05:06:44.692894 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:06:44.692899 | orchestrator | Friday 20 February 2026 05:06:41 +0000 (0:00:02.488) 0:10:48.650 *******
2026-02-20 05:06:44.692909 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:44.692913 | orchestrator |
2026-02-20 05:06:44.692918 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:06:44.692923 | orchestrator | Friday 20 February 2026 05:06:42 +0000 (0:00:01.107) 0:10:49.757 *******
2026-02-20 05:06:44.692927 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:44.692932 | orchestrator |
2026-02-20 05:06:44.692936 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:06:44.692941 | orchestrator | Friday 20 February 2026 05:06:43 +0000 (0:00:01.297) 0:10:51.054 *******
2026-02-20 05:06:44.692946 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:06:44.692950 | orchestrator |
2026-02-20 05:06:44.692955 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:06:44.692965 | orchestrator | Friday 20 February 2026 05:06:44 +0000 (0:00:01.114) 0:10:52.169 *******
2026-02-20 05:07:21.888315 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 05:07:21.888434 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.888451 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 05:07:21.888463 | orchestrator |
2026-02-20 05:07:21.888476 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:07:21.888490 | orchestrator | Friday 20 February 2026 05:06:45 +0000 (0:00:01.306) 0:10:53.476 *******
2026-02-20 05:07:21.888502 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 05:07:21.888513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.888524 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 05:07:21.888534 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.888546 | orchestrator |
2026-02-20 05:07:21.888557 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:07:21.888568 | orchestrator | Friday 20 February 2026 05:06:46 +0000 (0:00:00.928) 0:10:54.404 *******
2026-02-20 05:07:21.888579 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.888590 | orchestrator |
2026-02-20 05:07:21.888601 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:07:21.888612 | orchestrator | Friday 20 February 2026 05:06:47 +0000 (0:00:00.890) 0:10:55.295 *******
2026-02-20 05:07:21.888622 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:07:21.888634 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.888645 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:07:21.888676 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:07:21.888687 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:07:21.888698 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:07:21.888709 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:07:21.888720 | orchestrator |
2026-02-20 05:07:21.888731 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:07:21.888742 | orchestrator | Friday 20 February 2026 05:06:49 +0000 (0:00:01.855) 0:10:57.151 *******
2026-02-20 05:07:21.888753 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:07:21.888764 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.888775 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:07:21.888786 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:07:21.888797 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:07:21.888808 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:07:21.888849 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:07:21.888862 | orchestrator |
2026-02-20 05:07:21.888890 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-20 05:07:21.888905 | orchestrator | Friday 20 February 2026 05:06:51 +0000 (0:00:01.957) 0:10:59.108 *******
2026-02-20 05:07:21.888918 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.888931 | orchestrator |
2026-02-20 05:07:21.888943 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-20 05:07:21.888956 | orchestrator | Friday 20 February 2026 05:06:52 +0000 (0:00:00.850) 0:10:59.959 *******
2026-02-20 05:07:21.888969 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.888982 | orchestrator |
2026-02-20 05:07:21.888994 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-20 05:07:21.889008 | orchestrator | Friday 20 February 2026 05:06:53 +0000 (0:00:00.856) 0:11:00.815 *******
2026-02-20 05:07:21.889020 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.889033 | orchestrator |
2026-02-20 05:07:21.889047 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-20 05:07:21.889059 | orchestrator | Friday 20 February 2026 05:06:54 +0000 (0:00:00.759) 0:11:01.575 *******
2026-02-20 05:07:21.889072 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.889085 | orchestrator |
2026-02-20 05:07:21.889098 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-20 05:07:21.889111 | orchestrator | Friday 20 February 2026 05:06:55 +0000 (0:00:01.174) 0:11:02.750 *******
2026-02-20 05:07:21.889124 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.889137 | orchestrator |
2026-02-20 05:07:21.889150 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-20 05:07:21.889226 | orchestrator | Friday 20 February 2026 05:06:56 +0000 (0:00:00.775) 0:11:03.526 *******
2026-02-20 05:07:21.889239 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-20 05:07:21.889250 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.889261 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-20 05:07:21.889272 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.889283 | orchestrator |
2026-02-20 05:07:21.889294 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-20 05:07:21.889305 | orchestrator | Friday 20 February 2026 05:06:57 +0000 (0:00:01.029) 0:11:04.555 *******
2026-02-20 05:07:21.889316 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-20 05:07:21.889327 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-20 05:07:21.889355 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-20 05:07:21.889367 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-20 05:07:21.889378 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-20 05:07:21.889389 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-20 05:07:21.889400 | orchestrator | skipping: [testbed-node-1]
2026-02-20 05:07:21.889411 | orchestrator |
2026-02-20 05:07:21.889422 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-20 05:07:21.889433 | orchestrator | Friday 20 February 2026 05:06:58 +0000 (0:00:01.323) 0:11:05.879 *******
2026-02-20 05:07:21.889443 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.889454 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-20 05:07:21.889465 | orchestrator |
2026-02-20 05:07:21.889476 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-20 05:07:21.889487 | orchestrator | Friday 20 February 2026 05:07:01 +0000 (0:00:03.405) 0:11:09.285 *******
2026-02-20 05:07:21.889498 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:07:21.889519 | orchestrator | 2026-02-20 05:07:21.889530 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:07:21.889540 | orchestrator | Friday 20 February 2026 05:07:03 +0000 (0:00:02.171) 0:11:11.456 ******* 2026-02-20 05:07:21.889551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-20 05:07:21.889563 | orchestrator | 2026-02-20 05:07:21.889574 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:07:21.889585 | orchestrator | Friday 20 February 2026 05:07:05 +0000 (0:00:01.116) 0:11:12.572 ******* 2026-02-20 05:07:21.889596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-20 05:07:21.889607 | orchestrator | 2026-02-20 05:07:21.889618 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:07:21.889629 | orchestrator | Friday 20 February 2026 05:07:06 +0000 (0:00:01.123) 0:11:13.696 ******* 2026-02-20 05:07:21.889640 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:07:21.889659 | orchestrator | 2026-02-20 05:07:21.889678 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:07:21.889696 | orchestrator | Friday 20 February 2026 05:07:07 +0000 (0:00:01.541) 0:11:15.238 ******* 2026-02-20 05:07:21.889711 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.889736 | orchestrator | 2026-02-20 05:07:21.889759 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:07:21.889777 | orchestrator | Friday 20 February 2026 05:07:08 +0000 (0:00:01.104) 0:11:16.342 ******* 2026-02-20 05:07:21.889796 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
05:07:21.889814 | orchestrator | 2026-02-20 05:07:21.889832 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:07:21.889852 | orchestrator | Friday 20 February 2026 05:07:09 +0000 (0:00:01.114) 0:11:17.456 ******* 2026-02-20 05:07:21.889870 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.889889 | orchestrator | 2026-02-20 05:07:21.889907 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:07:21.889936 | orchestrator | Friday 20 February 2026 05:07:11 +0000 (0:00:01.159) 0:11:18.616 ******* 2026-02-20 05:07:21.889956 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:07:21.889975 | orchestrator | 2026-02-20 05:07:21.889993 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:07:21.890094 | orchestrator | Friday 20 February 2026 05:07:12 +0000 (0:00:01.560) 0:11:20.177 ******* 2026-02-20 05:07:21.890120 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.890139 | orchestrator | 2026-02-20 05:07:21.890184 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:07:21.890203 | orchestrator | Friday 20 February 2026 05:07:13 +0000 (0:00:01.106) 0:11:21.284 ******* 2026-02-20 05:07:21.890222 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.890241 | orchestrator | 2026-02-20 05:07:21.890260 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:07:21.890279 | orchestrator | Friday 20 February 2026 05:07:14 +0000 (0:00:01.131) 0:11:22.416 ******* 2026-02-20 05:07:21.890298 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:07:21.890314 | orchestrator | 2026-02-20 05:07:21.890326 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 05:07:21.890336 | orchestrator | Friday 20 February 2026 
05:07:16 +0000 (0:00:01.536) 0:11:23.952 ******* 2026-02-20 05:07:21.890347 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:07:21.890358 | orchestrator | 2026-02-20 05:07:21.890370 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:07:21.890381 | orchestrator | Friday 20 February 2026 05:07:18 +0000 (0:00:01.560) 0:11:25.513 ******* 2026-02-20 05:07:21.890392 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.890403 | orchestrator | 2026-02-20 05:07:21.890414 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:07:21.890425 | orchestrator | Friday 20 February 2026 05:07:18 +0000 (0:00:00.778) 0:11:26.291 ******* 2026-02-20 05:07:21.890448 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:07:21.890460 | orchestrator | 2026-02-20 05:07:21.890471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:07:21.890482 | orchestrator | Friday 20 February 2026 05:07:19 +0000 (0:00:00.773) 0:11:27.065 ******* 2026-02-20 05:07:21.890493 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.890504 | orchestrator | 2026-02-20 05:07:21.890515 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:07:21.890526 | orchestrator | Friday 20 February 2026 05:07:20 +0000 (0:00:00.790) 0:11:27.856 ******* 2026-02-20 05:07:21.890537 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:07:21.890548 | orchestrator | 2026-02-20 05:07:21.890559 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:07:21.890570 | orchestrator | Friday 20 February 2026 05:07:21 +0000 (0:00:00.749) 0:11:28.606 ******* 2026-02-20 05:07:21.890597 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856456 | orchestrator | 2026-02-20 05:08:01.856539 | orchestrator | TASK [ceph-handler 
: Set_fact handler_nfs_status] ****************************** 2026-02-20 05:08:01.856547 | orchestrator | Friday 20 February 2026 05:07:21 +0000 (0:00:00.753) 0:11:29.360 ******* 2026-02-20 05:08:01.856552 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856558 | orchestrator | 2026-02-20 05:08:01.856562 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:08:01.856566 | orchestrator | Friday 20 February 2026 05:07:22 +0000 (0:00:00.791) 0:11:30.151 ******* 2026-02-20 05:08:01.856571 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856575 | orchestrator | 2026-02-20 05:08:01.856579 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:08:01.856583 | orchestrator | Friday 20 February 2026 05:07:23 +0000 (0:00:00.807) 0:11:30.959 ******* 2026-02-20 05:08:01.856587 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856592 | orchestrator | 2026-02-20 05:08:01.856596 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:08:01.856600 | orchestrator | Friday 20 February 2026 05:07:24 +0000 (0:00:00.784) 0:11:31.744 ******* 2026-02-20 05:08:01.856604 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856608 | orchestrator | 2026-02-20 05:08:01.856612 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:08:01.856615 | orchestrator | Friday 20 February 2026 05:07:25 +0000 (0:00:00.783) 0:11:32.527 ******* 2026-02-20 05:08:01.856619 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856623 | orchestrator | 2026-02-20 05:08:01.856627 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:08:01.856631 | orchestrator | Friday 20 February 2026 05:07:25 +0000 (0:00:00.797) 0:11:33.324 ******* 2026-02-20 05:08:01.856635 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 05:08:01.856639 | orchestrator | 2026-02-20 05:08:01.856643 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:08:01.856647 | orchestrator | Friday 20 February 2026 05:07:26 +0000 (0:00:00.759) 0:11:34.084 ******* 2026-02-20 05:08:01.856651 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856655 | orchestrator | 2026-02-20 05:08:01.856659 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:08:01.856663 | orchestrator | Friday 20 February 2026 05:07:27 +0000 (0:00:00.766) 0:11:34.851 ******* 2026-02-20 05:08:01.856667 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856671 | orchestrator | 2026-02-20 05:08:01.856674 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:08:01.856678 | orchestrator | Friday 20 February 2026 05:07:28 +0000 (0:00:00.770) 0:11:35.622 ******* 2026-02-20 05:08:01.856682 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856686 | orchestrator | 2026-02-20 05:08:01.856690 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:08:01.856694 | orchestrator | Friday 20 February 2026 05:07:28 +0000 (0:00:00.754) 0:11:36.376 ******* 2026-02-20 05:08:01.856712 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856716 | orchestrator | 2026-02-20 05:08:01.856720 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:08:01.856724 | orchestrator | Friday 20 February 2026 05:07:29 +0000 (0:00:00.778) 0:11:37.155 ******* 2026-02-20 05:08:01.856728 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856732 | orchestrator | 2026-02-20 05:08:01.856745 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:08:01.856749 | 
orchestrator | Friday 20 February 2026 05:07:30 +0000 (0:00:00.766) 0:11:37.921 ******* 2026-02-20 05:08:01.856753 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856757 | orchestrator | 2026-02-20 05:08:01.856761 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:08:01.856765 | orchestrator | Friday 20 February 2026 05:07:31 +0000 (0:00:00.754) 0:11:38.676 ******* 2026-02-20 05:08:01.856769 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856773 | orchestrator | 2026-02-20 05:08:01.856777 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:08:01.856780 | orchestrator | Friday 20 February 2026 05:07:31 +0000 (0:00:00.779) 0:11:39.456 ******* 2026-02-20 05:08:01.856784 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856788 | orchestrator | 2026-02-20 05:08:01.856792 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:08:01.856796 | orchestrator | Friday 20 February 2026 05:07:32 +0000 (0:00:00.770) 0:11:40.227 ******* 2026-02-20 05:08:01.856800 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856804 | orchestrator | 2026-02-20 05:08:01.856808 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:08:01.856812 | orchestrator | Friday 20 February 2026 05:07:33 +0000 (0:00:00.769) 0:11:40.996 ******* 2026-02-20 05:08:01.856815 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856819 | orchestrator | 2026-02-20 05:08:01.856823 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:08:01.856827 | orchestrator | Friday 20 February 2026 05:07:34 +0000 (0:00:00.765) 0:11:41.762 ******* 2026-02-20 05:08:01.856831 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856835 | 
orchestrator | 2026-02-20 05:08:01.856838 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:08:01.856842 | orchestrator | Friday 20 February 2026 05:07:35 +0000 (0:00:00.795) 0:11:42.557 ******* 2026-02-20 05:08:01.856846 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856850 | orchestrator | 2026-02-20 05:08:01.856854 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:08:01.856858 | orchestrator | Friday 20 February 2026 05:07:36 +0000 (0:00:01.572) 0:11:44.129 ******* 2026-02-20 05:08:01.856861 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856865 | orchestrator | 2026-02-20 05:08:01.856869 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:08:01.856873 | orchestrator | Friday 20 February 2026 05:07:38 +0000 (0:00:02.125) 0:11:46.255 ******* 2026-02-20 05:08:01.856877 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-20 05:08:01.856882 | orchestrator | 2026-02-20 05:08:01.856894 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:08:01.856898 | orchestrator | Friday 20 February 2026 05:07:39 +0000 (0:00:01.127) 0:11:47.383 ******* 2026-02-20 05:08:01.856902 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856906 | orchestrator | 2026-02-20 05:08:01.856910 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:08:01.856914 | orchestrator | Friday 20 February 2026 05:07:41 +0000 (0:00:01.214) 0:11:48.597 ******* 2026-02-20 05:08:01.856918 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856922 | orchestrator | 2026-02-20 05:08:01.856925 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 
05:08:01.856933 | orchestrator | Friday 20 February 2026 05:07:42 +0000 (0:00:01.134) 0:11:49.732 ******* 2026-02-20 05:08:01.856937 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:08:01.856941 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:08:01.856944 | orchestrator | 2026-02-20 05:08:01.856948 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:08:01.856952 | orchestrator | Friday 20 February 2026 05:07:44 +0000 (0:00:01.828) 0:11:51.560 ******* 2026-02-20 05:08:01.856956 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.856960 | orchestrator | 2026-02-20 05:08:01.856964 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:08:01.856967 | orchestrator | Friday 20 February 2026 05:07:45 +0000 (0:00:01.485) 0:11:53.046 ******* 2026-02-20 05:08:01.856971 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856975 | orchestrator | 2026-02-20 05:08:01.856979 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:08:01.856983 | orchestrator | Friday 20 February 2026 05:07:46 +0000 (0:00:01.120) 0:11:54.166 ******* 2026-02-20 05:08:01.856987 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.856991 | orchestrator | 2026-02-20 05:08:01.856994 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:08:01.856998 | orchestrator | Friday 20 February 2026 05:07:47 +0000 (0:00:00.761) 0:11:54.928 ******* 2026-02-20 05:08:01.857002 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857006 | orchestrator | 2026-02-20 05:08:01.857010 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:08:01.857014 | orchestrator | Friday 20 
February 2026 05:07:48 +0000 (0:00:00.757) 0:11:55.686 ******* 2026-02-20 05:08:01.857018 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-20 05:08:01.857023 | orchestrator | 2026-02-20 05:08:01.857028 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:08:01.857032 | orchestrator | Friday 20 February 2026 05:07:49 +0000 (0:00:01.093) 0:11:56.779 ******* 2026-02-20 05:08:01.857037 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.857041 | orchestrator | 2026-02-20 05:08:01.857046 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:08:01.857051 | orchestrator | Friday 20 February 2026 05:07:51 +0000 (0:00:01.797) 0:11:58.577 ******* 2026-02-20 05:08:01.857056 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:08:01.857063 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:08:01.857068 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:08:01.857073 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857077 | orchestrator | 2026-02-20 05:08:01.857082 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:08:01.857086 | orchestrator | Friday 20 February 2026 05:07:52 +0000 (0:00:01.132) 0:11:59.709 ******* 2026-02-20 05:08:01.857091 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857095 | orchestrator | 2026-02-20 05:08:01.857100 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:08:01.857104 | orchestrator | Friday 20 February 2026 05:07:53 +0000 (0:00:01.162) 0:12:00.872 ******* 2026-02-20 05:08:01.857108 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
05:08:01.857113 | orchestrator | 2026-02-20 05:08:01.857117 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:08:01.857121 | orchestrator | Friday 20 February 2026 05:07:54 +0000 (0:00:01.185) 0:12:02.058 ******* 2026-02-20 05:08:01.857126 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857130 | orchestrator | 2026-02-20 05:08:01.857136 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:08:01.857140 | orchestrator | Friday 20 February 2026 05:07:55 +0000 (0:00:01.136) 0:12:03.194 ******* 2026-02-20 05:08:01.857148 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857152 | orchestrator | 2026-02-20 05:08:01.857157 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:08:01.857162 | orchestrator | Friday 20 February 2026 05:07:56 +0000 (0:00:01.116) 0:12:04.311 ******* 2026-02-20 05:08:01.857166 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:01.857171 | orchestrator | 2026-02-20 05:08:01.857175 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:08:01.857180 | orchestrator | Friday 20 February 2026 05:07:57 +0000 (0:00:00.764) 0:12:05.076 ******* 2026-02-20 05:08:01.857184 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.857189 | orchestrator | 2026-02-20 05:08:01.857193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:08:01.857198 | orchestrator | Friday 20 February 2026 05:07:59 +0000 (0:00:02.228) 0:12:07.305 ******* 2026-02-20 05:08:01.857202 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:01.857207 | orchestrator | 2026-02-20 05:08:01.857228 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:08:01.857233 | orchestrator | Friday 20 February 
2026 05:08:00 +0000 (0:00:00.851) 0:12:08.156 ******* 2026-02-20 05:08:01.857238 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-20 05:08:01.857242 | orchestrator | 2026-02-20 05:08:01.857249 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:08:37.770340 | orchestrator | Friday 20 February 2026 05:08:01 +0000 (0:00:01.169) 0:12:09.325 ******* 2026-02-20 05:08:37.770459 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770479 | orchestrator | 2026-02-20 05:08:37.770492 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:08:37.770504 | orchestrator | Friday 20 February 2026 05:08:03 +0000 (0:00:01.166) 0:12:10.492 ******* 2026-02-20 05:08:37.770515 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770527 | orchestrator | 2026-02-20 05:08:37.770539 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:08:37.770550 | orchestrator | Friday 20 February 2026 05:08:04 +0000 (0:00:01.169) 0:12:11.661 ******* 2026-02-20 05:08:37.770561 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770572 | orchestrator | 2026-02-20 05:08:37.770583 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:08:37.770594 | orchestrator | Friday 20 February 2026 05:08:05 +0000 (0:00:01.122) 0:12:12.784 ******* 2026-02-20 05:08:37.770605 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770616 | orchestrator | 2026-02-20 05:08:37.770628 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:08:37.770641 | orchestrator | Friday 20 February 2026 05:08:06 +0000 (0:00:01.116) 0:12:13.901 ******* 2026-02-20 05:08:37.770660 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770677 | 
orchestrator | 2026-02-20 05:08:37.770694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:08:37.770713 | orchestrator | Friday 20 February 2026 05:08:07 +0000 (0:00:01.121) 0:12:15.022 ******* 2026-02-20 05:08:37.770733 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770749 | orchestrator | 2026-02-20 05:08:37.770760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:08:37.770771 | orchestrator | Friday 20 February 2026 05:08:08 +0000 (0:00:01.116) 0:12:16.139 ******* 2026-02-20 05:08:37.770783 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770794 | orchestrator | 2026-02-20 05:08:37.770805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:08:37.770816 | orchestrator | Friday 20 February 2026 05:08:09 +0000 (0:00:01.135) 0:12:17.275 ******* 2026-02-20 05:08:37.770827 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.770857 | orchestrator | 2026-02-20 05:08:37.770870 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:08:37.770921 | orchestrator | Friday 20 February 2026 05:08:10 +0000 (0:00:01.141) 0:12:18.417 ******* 2026-02-20 05:08:37.770933 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:08:37.770945 | orchestrator | 2026-02-20 05:08:37.770956 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:08:37.770967 | orchestrator | Friday 20 February 2026 05:08:11 +0000 (0:00:00.799) 0:12:19.216 ******* 2026-02-20 05:08:37.770977 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-20 05:08:37.770990 | orchestrator | 2026-02-20 05:08:37.771001 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 
05:08:37.771012 | orchestrator | Friday 20 February 2026 05:08:12 +0000 (0:00:01.103) 0:12:20.320 ******* 2026-02-20 05:08:37.771023 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-20 05:08:37.771048 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-20 05:08:37.771060 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-20 05:08:37.771071 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-20 05:08:37.771081 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-20 05:08:37.771092 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-20 05:08:37.771103 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-20 05:08:37.771114 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:08:37.771125 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:08:37.771135 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:08:37.771146 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:08:37.771157 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:08:37.771168 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:08:37.771179 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:08:37.771189 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-02-20 05:08:37.771200 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-02-20 05:08:37.771211 | orchestrator | 2026-02-20 05:08:37.771222 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:08:37.771233 | orchestrator | Friday 20 February 2026 05:08:19 +0000 (0:00:06.483) 0:12:26.804 ******* 2026-02-20 05:08:37.771244 | orchestrator | skipping: 
[testbed-node-1] 2026-02-20 05:08:37.771255 | orchestrator | 2026-02-20 05:08:37.771448 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:08:37.771467 | orchestrator | Friday 20 February 2026 05:08:20 +0000 (0:00:00.777) 0:12:27.582 ******* 2026-02-20 05:08:37.771478 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771490 | orchestrator | 2026-02-20 05:08:37.771501 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:08:37.771512 | orchestrator | Friday 20 February 2026 05:08:20 +0000 (0:00:00.765) 0:12:28.347 ******* 2026-02-20 05:08:37.771522 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771533 | orchestrator | 2026-02-20 05:08:37.771545 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:08:37.771556 | orchestrator | Friday 20 February 2026 05:08:21 +0000 (0:00:00.752) 0:12:29.100 ******* 2026-02-20 05:08:37.771567 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771578 | orchestrator | 2026-02-20 05:08:37.771589 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:08:37.771622 | orchestrator | Friday 20 February 2026 05:08:22 +0000 (0:00:00.798) 0:12:29.898 ******* 2026-02-20 05:08:37.771633 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771644 | orchestrator | 2026-02-20 05:08:37.771655 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:08:37.771666 | orchestrator | Friday 20 February 2026 05:08:23 +0000 (0:00:00.745) 0:12:30.644 ******* 2026-02-20 05:08:37.771690 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771718 | orchestrator | 2026-02-20 05:08:37.771729 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-20 05:08:37.771741 | orchestrator | Friday 20 February 2026 05:08:23 +0000 (0:00:00.745) 0:12:31.390 ******* 2026-02-20 05:08:37.771752 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771763 | orchestrator | 2026-02-20 05:08:37.771774 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:08:37.771785 | orchestrator | Friday 20 February 2026 05:08:24 +0000 (0:00:00.761) 0:12:32.151 ******* 2026-02-20 05:08:37.771796 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771807 | orchestrator | 2026-02-20 05:08:37.771818 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:08:37.771829 | orchestrator | Friday 20 February 2026 05:08:25 +0000 (0:00:00.757) 0:12:32.909 ******* 2026-02-20 05:08:37.771840 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771851 | orchestrator | 2026-02-20 05:08:37.771862 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:08:37.771872 | orchestrator | Friday 20 February 2026 05:08:26 +0000 (0:00:00.768) 0:12:33.678 ******* 2026-02-20 05:08:37.771884 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771895 | orchestrator | 2026-02-20 05:08:37.771905 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:08:37.771916 | orchestrator | Friday 20 February 2026 05:08:26 +0000 (0:00:00.758) 0:12:34.436 ******* 2026-02-20 05:08:37.771927 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771938 | orchestrator | 2026-02-20 05:08:37.771949 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:08:37.771960 | orchestrator | Friday 20 February 2026 05:08:27 +0000 (0:00:00.762) 0:12:35.199 ******* 2026-02-20 
05:08:37.771971 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.771982 | orchestrator | 2026-02-20 05:08:37.771993 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:08:37.772004 | orchestrator | Friday 20 February 2026 05:08:28 +0000 (0:00:00.792) 0:12:35.991 ******* 2026-02-20 05:08:37.772015 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772026 | orchestrator | 2026-02-20 05:08:37.772037 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:08:37.772047 | orchestrator | Friday 20 February 2026 05:08:29 +0000 (0:00:00.863) 0:12:36.855 ******* 2026-02-20 05:08:37.772058 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772070 | orchestrator | 2026-02-20 05:08:37.772080 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:08:37.772091 | orchestrator | Friday 20 February 2026 05:08:30 +0000 (0:00:00.767) 0:12:37.623 ******* 2026-02-20 05:08:37.772102 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772113 | orchestrator | 2026-02-20 05:08:37.772133 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:08:37.772144 | orchestrator | Friday 20 February 2026 05:08:31 +0000 (0:00:00.884) 0:12:38.507 ******* 2026-02-20 05:08:37.772155 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772166 | orchestrator | 2026-02-20 05:08:37.772177 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:08:37.772188 | orchestrator | Friday 20 February 2026 05:08:31 +0000 (0:00:00.792) 0:12:39.300 ******* 2026-02-20 05:08:37.772199 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772210 | orchestrator | 2026-02-20 05:08:37.772222 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:08:37.772234 | orchestrator | Friday 20 February 2026 05:08:32 +0000 (0:00:00.785) 0:12:40.085 ******* 2026-02-20 05:08:37.772245 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772281 | orchestrator | 2026-02-20 05:08:37.772293 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:08:37.772304 | orchestrator | Friday 20 February 2026 05:08:33 +0000 (0:00:00.788) 0:12:40.874 ******* 2026-02-20 05:08:37.772315 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772327 | orchestrator | 2026-02-20 05:08:37.772337 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:08:37.772349 | orchestrator | Friday 20 February 2026 05:08:34 +0000 (0:00:00.801) 0:12:41.675 ******* 2026-02-20 05:08:37.772360 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772371 | orchestrator | 2026-02-20 05:08:37.772382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:08:37.772393 | orchestrator | Friday 20 February 2026 05:08:34 +0000 (0:00:00.765) 0:12:42.441 ******* 2026-02-20 05:08:37.772404 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:08:37.772415 | orchestrator | 2026-02-20 05:08:37.772426 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:08:37.772437 | orchestrator | Friday 20 February 2026 05:08:35 +0000 (0:00:00.765) 0:12:43.207 ******* 2026-02-20 05:08:37.772448 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:08:37.772459 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:08:37.772470 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:08:37.772482 | orchestrator | skipping: [testbed-node-1] 
2026-02-20 05:08:37.772493 | orchestrator | 2026-02-20 05:08:37.772504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:08:37.772515 | orchestrator | Friday 20 February 2026 05:08:36 +0000 (0:00:01.028) 0:12:44.235 ******* 2026-02-20 05:08:37.772526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:08:37.772543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:10:04.366318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:10:04.366498 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.366515 | orchestrator | 2026-02-20 05:10:04.366527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:10:04.366541 | orchestrator | Friday 20 February 2026 05:08:37 +0000 (0:00:01.008) 0:12:45.244 ******* 2026-02-20 05:10:04.366552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:10:04.366563 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:10:04.366574 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:10:04.366585 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.366596 | orchestrator | 2026-02-20 05:10:04.366607 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:10:04.366617 | orchestrator | Friday 20 February 2026 05:08:38 +0000 (0:00:01.026) 0:12:46.271 ******* 2026-02-20 05:10:04.366627 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.366639 | orchestrator | 2026-02-20 05:10:04.366650 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:10:04.366660 | orchestrator | Friday 20 February 2026 05:08:39 +0000 (0:00:00.753) 0:12:47.024 ******* 2026-02-20 05:10:04.366672 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-02-20 05:10:04.366682 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.366693 | orchestrator | 2026-02-20 05:10:04.366703 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:10:04.366714 | orchestrator | Friday 20 February 2026 05:08:40 +0000 (0:00:00.999) 0:12:48.024 ******* 2026-02-20 05:10:04.366724 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.366735 | orchestrator | 2026-02-20 05:10:04.366746 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:10:04.366756 | orchestrator | Friday 20 February 2026 05:08:41 +0000 (0:00:01.385) 0:12:49.409 ******* 2026-02-20 05:10:04.366767 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.366777 | orchestrator | 2026-02-20 05:10:04.366810 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-20 05:10:04.366823 | orchestrator | Friday 20 February 2026 05:08:42 +0000 (0:00:00.800) 0:12:50.209 ******* 2026-02-20 05:10:04.366834 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-02-20 05:10:04.366845 | orchestrator | 2026-02-20 05:10:04.366855 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-20 05:10:04.366866 | orchestrator | Friday 20 February 2026 05:08:43 +0000 (0:00:01.109) 0:12:51.319 ******* 2026-02-20 05:10:04.366878 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:10:04.366889 | orchestrator | 2026-02-20 05:10:04.366900 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-20 05:10:04.366911 | orchestrator | Friday 20 February 2026 05:08:47 +0000 (0:00:03.296) 0:12:54.615 ******* 2026-02-20 05:10:04.366922 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.366933 | orchestrator | 
2026-02-20 05:10:04.366943 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-20 05:10:04.366969 | orchestrator | Friday 20 February 2026 05:08:48 +0000 (0:00:01.154) 0:12:55.770 ******* 2026-02-20 05:10:04.366981 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.366992 | orchestrator | 2026-02-20 05:10:04.367002 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-20 05:10:04.367013 | orchestrator | Friday 20 February 2026 05:08:49 +0000 (0:00:01.122) 0:12:56.893 ******* 2026-02-20 05:10:04.367024 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367034 | orchestrator | 2026-02-20 05:10:04.367045 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-20 05:10:04.367057 | orchestrator | Friday 20 February 2026 05:08:50 +0000 (0:00:01.173) 0:12:58.066 ******* 2026-02-20 05:10:04.367068 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:10:04.367078 | orchestrator | 2026-02-20 05:10:04.367089 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-20 05:10:04.367101 | orchestrator | Friday 20 February 2026 05:08:52 +0000 (0:00:02.032) 0:13:00.098 ******* 2026-02-20 05:10:04.367113 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367124 | orchestrator | 2026-02-20 05:10:04.367134 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-20 05:10:04.367144 | orchestrator | Friday 20 February 2026 05:08:54 +0000 (0:00:01.651) 0:13:01.750 ******* 2026-02-20 05:10:04.367151 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367159 | orchestrator | 2026-02-20 05:10:04.367166 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-20 05:10:04.367173 | orchestrator | Friday 20 February 2026 05:08:55 +0000 (0:00:01.532) 0:13:03.283 
******* 2026-02-20 05:10:04.367181 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367188 | orchestrator | 2026-02-20 05:10:04.367195 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-20 05:10:04.367202 | orchestrator | Friday 20 February 2026 05:08:57 +0000 (0:00:01.501) 0:13:04.784 ******* 2026-02-20 05:10:04.367210 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:10:04.367217 | orchestrator | 2026-02-20 05:10:04.367224 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-20 05:10:04.367231 | orchestrator | Friday 20 February 2026 05:08:58 +0000 (0:00:01.556) 0:13:06.340 ******* 2026-02-20 05:10:04.367238 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:10:04.367246 | orchestrator | 2026-02-20 05:10:04.367253 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-20 05:10:04.367260 | orchestrator | Friday 20 February 2026 05:09:00 +0000 (0:00:01.566) 0:13:07.907 ******* 2026-02-20 05:10:04.367268 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:10:04.367275 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-20 05:10:04.367283 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 05:10:04.367297 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-20 05:10:04.367304 | orchestrator | 2026-02-20 05:10:04.367325 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-20 05:10:04.367332 | orchestrator | Friday 20 February 2026 05:09:04 +0000 (0:00:03.964) 0:13:11.872 ******* 2026-02-20 05:10:04.367342 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:10:04.367353 | orchestrator | 2026-02-20 05:10:04.367382 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon 
container command] ************************** 2026-02-20 05:10:04.367394 | orchestrator | Friday 20 February 2026 05:09:06 +0000 (0:00:02.137) 0:13:14.010 ******* 2026-02-20 05:10:04.367405 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367417 | orchestrator | 2026-02-20 05:10:04.367428 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-20 05:10:04.367439 | orchestrator | Friday 20 February 2026 05:09:07 +0000 (0:00:01.115) 0:13:15.125 ******* 2026-02-20 05:10:04.367449 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367457 | orchestrator | 2026-02-20 05:10:04.367465 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-20 05:10:04.367473 | orchestrator | Friday 20 February 2026 05:09:08 +0000 (0:00:01.135) 0:13:16.261 ******* 2026-02-20 05:10:04.367480 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367488 | orchestrator | 2026-02-20 05:10:04.367495 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-20 05:10:04.367502 | orchestrator | Friday 20 February 2026 05:09:10 +0000 (0:00:01.777) 0:13:18.038 ******* 2026-02-20 05:10:04.367509 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367517 | orchestrator | 2026-02-20 05:10:04.367524 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-20 05:10:04.367532 | orchestrator | Friday 20 February 2026 05:09:12 +0000 (0:00:01.448) 0:13:19.487 ******* 2026-02-20 05:10:04.367539 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.367547 | orchestrator | 2026-02-20 05:10:04.367554 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-20 05:10:04.367562 | orchestrator | Friday 20 February 2026 05:09:12 +0000 (0:00:00.782) 0:13:20.269 ******* 2026-02-20 05:10:04.367569 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-20 05:10:04.367576 | orchestrator | 2026-02-20 05:10:04.367584 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-20 05:10:04.367591 | orchestrator | Friday 20 February 2026 05:09:13 +0000 (0:00:01.103) 0:13:21.374 ******* 2026-02-20 05:10:04.367599 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.367606 | orchestrator | 2026-02-20 05:10:04.367614 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-20 05:10:04.367621 | orchestrator | Friday 20 February 2026 05:09:15 +0000 (0:00:01.114) 0:13:22.488 ******* 2026-02-20 05:10:04.367629 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.367636 | orchestrator | 2026-02-20 05:10:04.367643 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-20 05:10:04.367651 | orchestrator | Friday 20 February 2026 05:09:16 +0000 (0:00:01.092) 0:13:23.581 ******* 2026-02-20 05:10:04.367658 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-20 05:10:04.367666 | orchestrator | 2026-02-20 05:10:04.367673 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-20 05:10:04.367686 | orchestrator | Friday 20 February 2026 05:09:17 +0000 (0:00:01.099) 0:13:24.680 ******* 2026-02-20 05:10:04.367694 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367701 | orchestrator | 2026-02-20 05:10:04.367709 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-20 05:10:04.367716 | orchestrator | Friday 20 February 2026 05:09:19 +0000 (0:00:02.260) 0:13:26.941 ******* 2026-02-20 05:10:04.367723 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367731 | orchestrator | 2026-02-20 05:10:04.367738 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-20 05:10:04.367752 | orchestrator | Friday 20 February 2026 05:09:21 +0000 (0:00:01.943) 0:13:28.884 ******* 2026-02-20 05:10:04.367760 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367767 | orchestrator | 2026-02-20 05:10:04.367775 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-20 05:10:04.367782 | orchestrator | Friday 20 February 2026 05:09:23 +0000 (0:00:02.461) 0:13:31.346 ******* 2026-02-20 05:10:04.367790 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:10:04.367798 | orchestrator | 2026-02-20 05:10:04.367809 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-20 05:10:04.367820 | orchestrator | Friday 20 February 2026 05:09:26 +0000 (0:00:03.121) 0:13:34.468 ******* 2026-02-20 05:10:04.367831 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-20 05:10:04.367841 | orchestrator | 2026-02-20 05:10:04.367851 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-20 05:10:04.367860 | orchestrator | Friday 20 February 2026 05:09:28 +0000 (0:00:01.108) 0:13:35.576 ******* 2026-02-20 05:10:04.367871 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-20 05:10:04.367882 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367893 | orchestrator | 2026-02-20 05:10:04.367905 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-20 05:10:04.367916 | orchestrator | Friday 20 February 2026 05:09:51 +0000 (0:00:22.928) 0:13:58.505 ******* 2026-02-20 05:10:04.367927 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:04.367935 | orchestrator | 2026-02-20 05:10:04.367942 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-20 05:10:04.367949 | orchestrator | Friday 20 February 2026 05:09:53 +0000 (0:00:02.743) 0:14:01.249 ******* 2026-02-20 05:10:04.367957 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:04.367964 | orchestrator | 2026-02-20 05:10:04.367971 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-20 05:10:04.367978 | orchestrator | Friday 20 February 2026 05:09:54 +0000 (0:00:00.768) 0:14:02.018 ******* 2026-02-20 05:10:04.367994 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-20 05:10:35.447610 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-20 05:10:35.447746 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-20 05:10:35.447770 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-20 05:10:35.447787 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-20 05:10:35.447853 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}])  2026-02-20 05:10:35.447872 | orchestrator | 2026-02-20 05:10:35.447882 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-20 05:10:35.447896 | orchestrator | Friday 20 February 2026 05:10:04 +0000 (0:00:09.813) 0:14:11.831 ******* 2026-02-20 05:10:35.447910 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:10:35.447933 | orchestrator | 
2026-02-20 05:10:35.447949 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:10:35.447964 | orchestrator | Friday 20 February 2026 05:10:06 +0000 (0:00:02.593) 0:14:14.425 ******* 2026-02-20 05:10:35.447978 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:10:35.447991 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-20 05:10:35.448003 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-20 05:10:35.448016 | orchestrator | 2026-02-20 05:10:35.448029 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:10:35.448042 | orchestrator | Friday 20 February 2026 05:10:08 +0000 (0:00:01.507) 0:14:15.932 ******* 2026-02-20 05:10:35.448055 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 05:10:35.448069 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-20 05:10:35.448082 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 05:10:35.448096 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:35.448110 | orchestrator | 2026-02-20 05:10:35.448126 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-20 05:10:35.448141 | orchestrator | Friday 20 February 2026 05:10:09 +0000 (0:00:01.063) 0:14:16.996 ******* 2026-02-20 05:10:35.448158 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:10:35.448174 | orchestrator | 2026-02-20 05:10:35.448189 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-20 05:10:35.448204 | orchestrator | Friday 20 February 2026 05:10:10 +0000 (0:00:00.785) 0:14:17.782 ******* 2026-02-20 05:10:35.448218 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:10:35.448234 | orchestrator | 2026-02-20 05:10:35.448249 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-20 05:10:35.448265 | orchestrator | 2026-02-20 05:10:35.448280 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-20 05:10:35.448296 | orchestrator | Friday 20 February 2026 05:10:12 +0000 (0:00:02.267) 0:14:20.049 ******* 2026-02-20 05:10:35.448312 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448327 | orchestrator | 2026-02-20 05:10:35.448343 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-20 05:10:35.448358 | orchestrator | Friday 20 February 2026 05:10:13 +0000 (0:00:01.114) 0:14:21.163 ******* 2026-02-20 05:10:35.448372 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448387 | orchestrator | 2026-02-20 05:10:35.448426 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-20 05:10:35.448442 | orchestrator | Friday 20 February 2026 05:10:14 +0000 (0:00:00.779) 0:14:21.943 ******* 2026-02-20 05:10:35.448457 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:10:35.448474 | orchestrator | 2026-02-20 05:10:35.448516 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-20 05:10:35.448532 | orchestrator | Friday 20 February 2026 05:10:15 +0000 (0:00:00.807) 0:14:22.750 ******* 2026-02-20 05:10:35.448548 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448576 | orchestrator | 2026-02-20 05:10:35.448585 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:10:35.448594 | orchestrator | Friday 20 February 
2026 05:10:16 +0000 (0:00:00.762) 0:14:23.512 ******* 2026-02-20 05:10:35.448603 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-20 05:10:35.448612 | orchestrator | 2026-02-20 05:10:35.448620 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:10:35.448634 | orchestrator | Friday 20 February 2026 05:10:17 +0000 (0:00:01.085) 0:14:24.598 ******* 2026-02-20 05:10:35.448648 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448663 | orchestrator | 2026-02-20 05:10:35.448677 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:10:35.448692 | orchestrator | Friday 20 February 2026 05:10:18 +0000 (0:00:01.534) 0:14:26.132 ******* 2026-02-20 05:10:35.448707 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448722 | orchestrator | 2026-02-20 05:10:35.448737 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:10:35.448751 | orchestrator | Friday 20 February 2026 05:10:19 +0000 (0:00:01.090) 0:14:27.222 ******* 2026-02-20 05:10:35.448760 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448769 | orchestrator | 2026-02-20 05:10:35.448777 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:10:35.448786 | orchestrator | Friday 20 February 2026 05:10:21 +0000 (0:00:01.426) 0:14:28.648 ******* 2026-02-20 05:10:35.448795 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448804 | orchestrator | 2026-02-20 05:10:35.448813 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:10:35.448822 | orchestrator | Friday 20 February 2026 05:10:22 +0000 (0:00:01.086) 0:14:29.735 ******* 2026-02-20 05:10:35.448831 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448840 | orchestrator | 2026-02-20 05:10:35.448849 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:10:35.448858 | orchestrator | Friday 20 February 2026 05:10:23 +0000 (0:00:01.104) 0:14:30.840 ******* 2026-02-20 05:10:35.448866 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448875 | orchestrator | 2026-02-20 05:10:35.448884 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:10:35.448893 | orchestrator | Friday 20 February 2026 05:10:24 +0000 (0:00:01.098) 0:14:31.939 ******* 2026-02-20 05:10:35.448902 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:10:35.448911 | orchestrator | 2026-02-20 05:10:35.448919 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:10:35.448928 | orchestrator | Friday 20 February 2026 05:10:25 +0000 (0:00:01.066) 0:14:33.006 ******* 2026-02-20 05:10:35.448945 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.448954 | orchestrator | 2026-02-20 05:10:35.448963 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:10:35.448972 | orchestrator | Friday 20 February 2026 05:10:26 +0000 (0:00:01.001) 0:14:34.007 ******* 2026-02-20 05:10:35.448981 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:10:35.448990 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:10:35.448999 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:10:35.449008 | orchestrator | 2026-02-20 05:10:35.449018 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:10:35.449033 | orchestrator | Friday 20 February 2026 05:10:28 +0000 (0:00:01.669) 0:14:35.676 ******* 2026-02-20 05:10:35.449057 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:10:35.449071 | 
orchestrator |
2026-02-20 05:10:35.449085 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-20 05:10:35.449099 | orchestrator | Friday 20 February 2026 05:10:29 +0000 (0:00:01.011) 0:14:36.688 *******
2026-02-20 05:10:35.449113 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:10:35.449137 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:10:35.449152 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:10:35.449166 | orchestrator |
2026-02-20 05:10:35.449179 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-20 05:10:35.449194 | orchestrator | Friday 20 February 2026 05:10:32 +0000 (0:00:02.934) 0:14:39.623 *******
2026-02-20 05:10:35.449208 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 05:10:35.449224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 05:10:35.449236 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:10:35.449249 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:35.449264 | orchestrator |
2026-02-20 05:10:35.449279 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-20 05:10:35.449294 | orchestrator | Friday 20 February 2026 05:10:33 +0000 (0:00:01.514) 0:14:41.137 *******
2026-02-20 05:10:35.449311 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 05:10:35.449329 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 05:10:35.449359 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689274 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.689395 | orchestrator |
2026-02-20 05:10:55.689483 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-20 05:10:55.689504 | orchestrator | Friday 20 February 2026 05:10:35 +0000 (0:00:01.779) 0:14:42.917 *******
2026-02-20 05:10:55.689520 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689535 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689547 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689559 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.689572 | orchestrator |
2026-02-20 05:10:55.689584 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-20 05:10:55.689595 | orchestrator | Friday 20 February 2026 05:10:36 +0000 (0:00:01.103) 0:14:44.021 *******
2026-02-20 05:10:55.689627 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:10:29.671023', 'end': '2026-02-20 05:10:29.743100', 'delta': '0:00:00.072077', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689666 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:10:30.383506', 'end': '2026-02-20 05:10:30.436355', 'delta': '0:00:00.052849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689678 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '28a82f95a8fd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:10:30.968353', 'end': '2026-02-20 05:10:31.027030', 'delta': '0:00:00.058677', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['28a82f95a8fd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-20 05:10:55.689688 | orchestrator |
2026-02-20 05:10:55.689700 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-20 05:10:55.689729 | orchestrator | Friday 20 February 2026 05:10:37 +0000 (0:00:01.155) 0:14:45.177 *******
2026-02-20 05:10:55.689740 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:10:55.689752 | orchestrator |
2026-02-20 05:10:55.689762 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-20 05:10:55.689772 | orchestrator | Friday 20 February 2026 05:10:38 +0000 (0:00:01.212) 0:14:46.389 *******
2026-02-20 05:10:55.689783 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.689793 | orchestrator |
2026-02-20 05:10:55.689804 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-20 05:10:55.689817 | orchestrator | Friday 20 February 2026 05:10:40 +0000 (0:00:01.158) 0:14:47.547 *******
2026-02-20 05:10:55.689830 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:10:55.689843 | orchestrator |
2026-02-20 05:10:55.689855 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-20 05:10:55.689868 | orchestrator | Friday 20 February 2026 05:10:41 +0000 (0:00:01.090) 0:14:48.638 *******
2026-02-20 05:10:55.689880 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:10:55.689892 | orchestrator |
2026-02-20 05:10:55.689905 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:10:55.689918 | orchestrator | Friday 20 February 2026 05:10:43 +0000 (0:00:01.919) 0:14:50.557 *******
2026-02-20 05:10:55.689931 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:10:55.689943 | orchestrator |
2026-02-20 05:10:55.689955 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-20 05:10:55.689968 | orchestrator | Friday 20 February 2026 05:10:44 +0000 (0:00:01.155) 0:14:51.713 *******
2026-02-20 05:10:55.689981 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.689993 | orchestrator |
2026-02-20 05:10:55.690074 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-20 05:10:55.690088 | orchestrator | Friday 20 February 2026 05:10:45 +0000 (0:00:01.090) 0:14:52.803 *******
2026-02-20 05:10:55.690101 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690114 | orchestrator |
2026-02-20 05:10:55.690127 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:10:55.690138 | orchestrator | Friday 20 February 2026 05:10:46 +0000 (0:00:01.156) 0:14:53.960 *******
2026-02-20 05:10:55.690152 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690165 | orchestrator |
2026-02-20 05:10:55.690178 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-20 05:10:55.690190 | orchestrator | Friday 20 February 2026 05:10:47 +0000 (0:00:01.086) 0:14:55.047 *******
2026-02-20 05:10:55.690201 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690212 | orchestrator |
2026-02-20 05:10:55.690223 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-20 05:10:55.690245 | orchestrator | Friday 20 February 2026 05:10:48 +0000 (0:00:01.228) 0:14:56.276 *******
2026-02-20 05:10:55.690258 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690271 | orchestrator |
2026-02-20 05:10:55.690282 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-20 05:10:55.690295 | orchestrator | Friday 20 February 2026 05:10:49 +0000 (0:00:01.104) 0:14:57.381 *******
2026-02-20 05:10:55.690308 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690321 | orchestrator |
2026-02-20 05:10:55.690333 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-20 05:10:55.690346 | orchestrator | Friday 20 February 2026 05:10:51 +0000 (0:00:01.168) 0:14:58.550 *******
2026-02-20 05:10:55.690359 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690373 | orchestrator |
2026-02-20 05:10:55.690386 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-20 05:10:55.690400 | orchestrator | Friday 20 February 2026 05:10:52 +0000 (0:00:01.129) 0:14:59.680 *******
2026-02-20 05:10:55.690413 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690455 | orchestrator |
2026-02-20 05:10:55.690467 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-20 05:10:55.690479 | orchestrator | Friday 20 February 2026 05:10:53 +0000 (0:00:01.114) 0:15:00.794 *******
2026-02-20 05:10:55.690490 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:55.690503 | orchestrator |
2026-02-20 05:10:55.690514 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-20 05:10:55.690525 | orchestrator | Friday 20 February 2026 05:10:54 +0000 (0:00:01.111) 0:15:01.906 *******
2026-02-20 05:10:55.690539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:55.690552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:55.690578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-20 05:10:56.937677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-20 05:10:56.937795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:10:56.937820 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:10:56.937834 | orchestrator |
2026-02-20 05:10:56.937846 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-20 05:10:56.937858 | orchestrator | Friday 20 February 2026 05:10:55 +0000 (0:00:01.253) 0:15:03.159 *******
2026-02-20 05:10:56.937872 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:56.937890 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:56.937900 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:56.937911 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:10:56.937929 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936235 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936358 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936396 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936543 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936560 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:11:12.936573 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.936588 | orchestrator |
2026-02-20 05:11:12.936600 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 05:11:12.936613 | orchestrator | Friday 20 February 2026 05:10:56 +0000 (0:00:01.255) 0:15:04.415 *******
2026-02-20 05:11:12.936624 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:12.936635 | orchestrator |
2026-02-20 05:11:12.936647 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:11:12.936658 | orchestrator | Friday 20 February 2026 05:10:58 +0000 (0:00:01.540) 0:15:05.955 *******
2026-02-20 05:11:12.936668 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:12.936679 | orchestrator |
2026-02-20 05:11:12.936777 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:11:12.936790 | orchestrator | Friday 20 February 2026 05:10:59 +0000 (0:00:01.142) 0:15:07.098 *******
2026-02-20 05:11:12.936802 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:12.936815 | orchestrator |
2026-02-20 05:11:12.936828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:11:12.936840 | orchestrator | Friday 20 February 2026 05:11:01 +0000 (0:00:01.560) 0:15:08.659 *******
2026-02-20 05:11:12.936853 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.936866 | orchestrator |
2026-02-20 05:11:12.936879 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:11:12.936899 | orchestrator | Friday 20 February 2026 05:11:02 +0000 (0:00:01.091) 0:15:09.751 *******
2026-02-20 05:11:12.936912 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.936925 | orchestrator |
2026-02-20 05:11:12.936937 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:11:12.936950 | orchestrator | Friday 20 February 2026 05:11:03 +0000 (0:00:01.247) 0:15:10.999 *******
2026-02-20 05:11:12.936963 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.936982 | orchestrator |
2026-02-20 05:11:12.937001 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:11:12.937020 | orchestrator | Friday 20 February 2026 05:11:04 +0000 (0:00:01.193) 0:15:12.193 *******
2026-02-20 05:11:12.937039 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 05:11:12.937058 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 05:11:12.937076 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:12.937094 | orchestrator |
2026-02-20 05:11:12.937112 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:11:12.937130 | orchestrator | Friday 20 February 2026 05:11:06 +0000 (0:00:02.005) 0:15:14.198 *******
2026-02-20 05:11:12.937166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 05:11:12.937186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 05:11:12.937203 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:12.937219 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.937237 | orchestrator |
2026-02-20 05:11:12.937256 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:11:12.937274 | orchestrator | Friday 20 February 2026 05:11:07 +0000 (0:00:01.178) 0:15:15.377 *******
2026-02-20 05:11:12.937292 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:12.937310 | orchestrator |
2026-02-20 05:11:12.937330 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:11:12.937348 | orchestrator | Friday 20 February 2026 05:11:09 +0000 (0:00:01.143) 0:15:16.520 *******
2026-02-20 05:11:12.937367 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:11:12.937386 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:11:12.937405 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:12.937423 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:11:12.937476 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:11:12.937489 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:11:12.937499 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:11:12.937510 | orchestrator |
2026-02-20 05:11:12.937521 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:11:12.937533 | orchestrator | Friday 20 February 2026 05:11:10 +0000 (0:00:01.788) 0:15:18.308 *******
2026-02-20 05:11:12.937543 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:11:12.937554 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:11:12.937566 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:12.937590 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:11:51.268633 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:11:51.268720 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:11:51.268727 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:11:51.268732 | orchestrator |
2026-02-20 05:11:51.268737 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-20 05:11:51.268743 | orchestrator | Friday 20 February 2026 05:11:12 +0000 (0:00:02.099) 0:15:20.408 *******
2026-02-20 05:11:51.268747 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268753 | orchestrator |
2026-02-20 05:11:51.268757 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-20 05:11:51.268761 | orchestrator | Friday 20 February 2026 05:11:13 +0000 (0:00:00.852) 0:15:21.261 *******
2026-02-20 05:11:51.268765 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268769 | orchestrator |
2026-02-20 05:11:51.268773 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-20 05:11:51.268776 | orchestrator | Friday 20 February 2026 05:11:14 +0000 (0:00:00.850) 0:15:22.111 *******
2026-02-20 05:11:51.268780 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268784 | orchestrator |
2026-02-20 05:11:51.268788 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-20 05:11:51.268792 | orchestrator | Friday 20 February 2026 05:11:15 +0000 (0:00:00.762) 0:15:22.873 *******
2026-02-20 05:11:51.268796 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268816 | orchestrator |
2026-02-20 05:11:51.268820 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-20 05:11:51.268824 | orchestrator | Friday 20 February 2026 05:11:16 +0000 (0:00:00.844) 0:15:23.718 *******
2026-02-20 05:11:51.268828 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268832 | orchestrator |
2026-02-20 05:11:51.268835 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-20 05:11:51.268839 | orchestrator | Friday 20 February 2026 05:11:16 +0000 (0:00:00.753) 0:15:24.471 *******
2026-02-20 05:11:51.268843 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 05:11:51.268847 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 05:11:51.268851 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:51.268855 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268868 | orchestrator |
2026-02-20 05:11:51.268872 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-20 05:11:51.268876 | orchestrator | Friday 20 February 2026 05:11:18 +0000 (0:00:01.018) 0:15:25.489 *******
2026-02-20 05:11:51.268881 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-20 05:11:51.268887 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-20 05:11:51.268893 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-20 05:11:51.268902 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-20 05:11:51.268909 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-20 05:11:51.268915 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-20 05:11:51.268921 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.268927 | orchestrator |
2026-02-20 05:11:51.268933 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-20 05:11:51.268938 | orchestrator | Friday 20 February 2026 05:11:19 +0000 (0:00:01.581) 0:15:27.071 *******
2026-02-20 05:11:51.268945 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:51.268952 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:11:51.268957 | orchestrator |
2026-02-20 05:11:51.268963 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-20 05:11:51.268969 | orchestrator | Friday 20 February 2026 05:11:22 +0000 (0:00:03.393) 0:15:30.464 *******
2026-02-20 05:11:51.268975 | orchestrator | changed: [testbed-node-2]
2026-02-20 05:11:51.268982 | orchestrator |
2026-02-20 05:11:51.268987 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 05:11:51.268993 | orchestrator | Friday 20 February 2026 05:11:25 +0000 (0:00:02.255) 0:15:32.720 *******
2026-02-20 05:11:51.269000 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-20 05:11:51.269007 | orchestrator |
2026-02-20 05:11:51.269013 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 05:11:51.269018 | orchestrator | Friday 20 February 2026 05:11:26 +0000 (0:00:01.183) 0:15:33.904 *******
2026-02-20 05:11:51.269024 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-20 05:11:51.269030 | orchestrator |
2026-02-20 05:11:51.269036 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 05:11:51.269042 | orchestrator | Friday 20 February 2026 05:11:27 +0000 (0:00:01.092) 0:15:34.997 *******
2026-02-20 05:11:51.269048 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:51.269054 | orchestrator |
2026-02-20 05:11:51.269060 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 05:11:51.269067 | orchestrator | Friday 20 February 2026 05:11:29 +0000 (0:00:01.533) 0:15:36.530 *******
2026-02-20 05:11:51.269073 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.269080 | orchestrator |
2026-02-20 05:11:51.269094 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 05:11:51.269101 | orchestrator | Friday 20 February 2026 05:11:30 +0000 (0:00:01.127) 0:15:37.658 *******
2026-02-20 05:11:51.269107 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.269113 | orchestrator |
2026-02-20 05:11:51.269120 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 05:11:51.269142 | orchestrator | Friday 20 February 2026 05:11:31 +0000 (0:00:01.157) 0:15:38.815 *******
2026-02-20 05:11:51.269149 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.269156 | orchestrator |
2026-02-20 05:11:51.269163 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 05:11:51.269167 | orchestrator | Friday 20 February 2026 05:11:32 +0000 (0:00:01.105) 0:15:39.921 *******
2026-02-20 05:11:51.269171 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:51.269175 | orchestrator |
2026-02-20 05:11:51.269179 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:11:51.269185 | orchestrator | Friday 20 February 2026 05:11:34 +0000 (0:00:01.629) 0:15:41.551 *******
2026-02-20 05:11:51.269192 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.269198 | orchestrator |
2026-02-20 05:11:51.269205 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:11:51.269211 | orchestrator | Friday 20 February 2026 05:11:35 +0000 (0:00:01.103) 0:15:42.654 *******
2026-02-20 05:11:51.269217 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:11:51.269224 | orchestrator |
2026-02-20 05:11:51.269231 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:11:51.269238 | orchestrator | Friday 20 February 2026 05:11:36 +0000 (0:00:01.115) 0:15:43.770 *******
2026-02-20 05:11:51.269245 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:11:51.269251 | orchestrator |
2026-02-20 05:11:51.269258 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:11:51.269265 | orchestrator | Friday 20 February 2026
05:11:37 +0000 (0:00:01.632) 0:15:45.403 ******* 2026-02-20 05:11:51.269271 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:11:51.269279 | orchestrator | 2026-02-20 05:11:51.269285 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:11:51.269292 | orchestrator | Friday 20 February 2026 05:11:39 +0000 (0:00:01.576) 0:15:46.979 ******* 2026-02-20 05:11:51.269298 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269305 | orchestrator | 2026-02-20 05:11:51.269311 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:11:51.269318 | orchestrator | Friday 20 February 2026 05:11:40 +0000 (0:00:00.780) 0:15:47.759 ******* 2026-02-20 05:11:51.269324 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:11:51.269330 | orchestrator | 2026-02-20 05:11:51.269337 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:11:51.269343 | orchestrator | Friday 20 February 2026 05:11:41 +0000 (0:00:00.808) 0:15:48.568 ******* 2026-02-20 05:11:51.269355 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269362 | orchestrator | 2026-02-20 05:11:51.269368 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:11:51.269374 | orchestrator | Friday 20 February 2026 05:11:41 +0000 (0:00:00.801) 0:15:49.370 ******* 2026-02-20 05:11:51.269380 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269386 | orchestrator | 2026-02-20 05:11:51.269393 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:11:51.269400 | orchestrator | Friday 20 February 2026 05:11:42 +0000 (0:00:00.761) 0:15:50.132 ******* 2026-02-20 05:11:51.269405 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269412 | orchestrator | 2026-02-20 05:11:51.269418 | orchestrator | TASK [ceph-handler 
: Set_fact handler_nfs_status] ****************************** 2026-02-20 05:11:51.269424 | orchestrator | Friday 20 February 2026 05:11:43 +0000 (0:00:00.791) 0:15:50.924 ******* 2026-02-20 05:11:51.269431 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269437 | orchestrator | 2026-02-20 05:11:51.269449 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:11:51.269456 | orchestrator | Friday 20 February 2026 05:11:44 +0000 (0:00:00.795) 0:15:51.719 ******* 2026-02-20 05:11:51.269500 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269507 | orchestrator | 2026-02-20 05:11:51.269514 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:11:51.269520 | orchestrator | Friday 20 February 2026 05:11:45 +0000 (0:00:00.824) 0:15:52.544 ******* 2026-02-20 05:11:51.269526 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:11:51.269533 | orchestrator | 2026-02-20 05:11:51.269539 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:11:51.269545 | orchestrator | Friday 20 February 2026 05:11:45 +0000 (0:00:00.786) 0:15:53.330 ******* 2026-02-20 05:11:51.269551 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:11:51.269558 | orchestrator | 2026-02-20 05:11:51.269564 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:11:51.269570 | orchestrator | Friday 20 February 2026 05:11:46 +0000 (0:00:00.776) 0:15:54.106 ******* 2026-02-20 05:11:51.269576 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:11:51.269582 | orchestrator | 2026-02-20 05:11:51.269588 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:11:51.269595 | orchestrator | Friday 20 February 2026 05:11:47 +0000 (0:00:00.786) 0:15:54.893 ******* 2026-02-20 05:11:51.269602 | orchestrator | skipping: 
[testbed-node-2] 2026-02-20 05:11:51.269608 | orchestrator | 2026-02-20 05:11:51.269614 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:11:51.269620 | orchestrator | Friday 20 February 2026 05:11:48 +0000 (0:00:00.785) 0:15:55.678 ******* 2026-02-20 05:11:51.269627 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269633 | orchestrator | 2026-02-20 05:11:51.269639 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:11:51.269646 | orchestrator | Friday 20 February 2026 05:11:48 +0000 (0:00:00.740) 0:15:56.419 ******* 2026-02-20 05:11:51.269652 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269658 | orchestrator | 2026-02-20 05:11:51.269664 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:11:51.269670 | orchestrator | Friday 20 February 2026 05:11:49 +0000 (0:00:00.757) 0:15:57.177 ******* 2026-02-20 05:11:51.269676 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269683 | orchestrator | 2026-02-20 05:11:51.269689 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:11:51.269696 | orchestrator | Friday 20 February 2026 05:11:50 +0000 (0:00:00.793) 0:15:57.970 ******* 2026-02-20 05:11:51.269702 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:11:51.269709 | orchestrator | 2026-02-20 05:11:51.269720 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:12:34.563574 | orchestrator | Friday 20 February 2026 05:11:51 +0000 (0:00:00.765) 0:15:58.736 ******* 2026-02-20 05:12:34.563741 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.563770 | orchestrator | 2026-02-20 05:12:34.563791 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:12:34.563813 | 
orchestrator | Friday 20 February 2026 05:11:52 +0000 (0:00:00.752) 0:15:59.489 ******* 2026-02-20 05:12:34.563830 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.563849 | orchestrator | 2026-02-20 05:12:34.563870 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:12:34.563889 | orchestrator | Friday 20 February 2026 05:11:52 +0000 (0:00:00.777) 0:16:00.266 ******* 2026-02-20 05:12:34.563907 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.563923 | orchestrator | 2026-02-20 05:12:34.563939 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:12:34.563957 | orchestrator | Friday 20 February 2026 05:11:53 +0000 (0:00:00.779) 0:16:01.045 ******* 2026-02-20 05:12:34.563974 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564025 | orchestrator | 2026-02-20 05:12:34.564043 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:12:34.564058 | orchestrator | Friday 20 February 2026 05:11:54 +0000 (0:00:00.743) 0:16:01.789 ******* 2026-02-20 05:12:34.564069 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564083 | orchestrator | 2026-02-20 05:12:34.564100 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:12:34.564117 | orchestrator | Friday 20 February 2026 05:11:55 +0000 (0:00:00.783) 0:16:02.572 ******* 2026-02-20 05:12:34.564133 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564150 | orchestrator | 2026-02-20 05:12:34.564168 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:12:34.564185 | orchestrator | Friday 20 February 2026 05:11:55 +0000 (0:00:00.760) 0:16:03.333 ******* 2026-02-20 05:12:34.564203 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564220 | 
orchestrator | 2026-02-20 05:12:34.564237 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:12:34.564253 | orchestrator | Friday 20 February 2026 05:11:56 +0000 (0:00:00.771) 0:16:04.104 ******* 2026-02-20 05:12:34.564270 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.564288 | orchestrator | 2026-02-20 05:12:34.564321 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:12:34.564339 | orchestrator | Friday 20 February 2026 05:11:58 +0000 (0:00:01.633) 0:16:05.738 ******* 2026-02-20 05:12:34.564354 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.564370 | orchestrator | 2026-02-20 05:12:34.564386 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:12:34.564402 | orchestrator | Friday 20 February 2026 05:12:00 +0000 (0:00:02.308) 0:16:08.046 ******* 2026-02-20 05:12:34.564419 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-02-20 05:12:34.564437 | orchestrator | 2026-02-20 05:12:34.564454 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:12:34.564470 | orchestrator | Friday 20 February 2026 05:12:01 +0000 (0:00:01.145) 0:16:09.192 ******* 2026-02-20 05:12:34.564485 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564519 | orchestrator | 2026-02-20 05:12:34.564538 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:12:34.564549 | orchestrator | Friday 20 February 2026 05:12:02 +0000 (0:00:01.115) 0:16:10.308 ******* 2026-02-20 05:12:34.564558 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564568 | orchestrator | 2026-02-20 05:12:34.564578 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 
05:12:34.564588 | orchestrator | Friday 20 February 2026 05:12:03 +0000 (0:00:01.130) 0:16:11.438 ******* 2026-02-20 05:12:34.564604 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:12:34.564620 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:12:34.564636 | orchestrator | 2026-02-20 05:12:34.564652 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:12:34.564667 | orchestrator | Friday 20 February 2026 05:12:05 +0000 (0:00:01.968) 0:16:13.406 ******* 2026-02-20 05:12:34.564683 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.564699 | orchestrator | 2026-02-20 05:12:34.564716 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:12:34.564731 | orchestrator | Friday 20 February 2026 05:12:07 +0000 (0:00:01.498) 0:16:14.906 ******* 2026-02-20 05:12:34.564746 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564761 | orchestrator | 2026-02-20 05:12:34.564776 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:12:34.564793 | orchestrator | Friday 20 February 2026 05:12:08 +0000 (0:00:01.129) 0:16:16.035 ******* 2026-02-20 05:12:34.564809 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564825 | orchestrator | 2026-02-20 05:12:34.564840 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:12:34.564871 | orchestrator | Friday 20 February 2026 05:12:09 +0000 (0:00:00.762) 0:16:16.797 ******* 2026-02-20 05:12:34.564889 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.564905 | orchestrator | 2026-02-20 05:12:34.564921 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:12:34.564937 | orchestrator | Friday 20 
February 2026 05:12:10 +0000 (0:00:00.753) 0:16:17.551 ******* 2026-02-20 05:12:34.564954 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-02-20 05:12:34.564970 | orchestrator | 2026-02-20 05:12:34.564986 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:12:34.565002 | orchestrator | Friday 20 February 2026 05:12:11 +0000 (0:00:01.107) 0:16:18.659 ******* 2026-02-20 05:12:34.565019 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.565036 | orchestrator | 2026-02-20 05:12:34.565053 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:12:34.565096 | orchestrator | Friday 20 February 2026 05:12:13 +0000 (0:00:01.949) 0:16:20.608 ******* 2026-02-20 05:12:34.565108 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:12:34.565118 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:12:34.565127 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:12:34.565137 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565147 | orchestrator | 2026-02-20 05:12:34.565156 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:12:34.565166 | orchestrator | Friday 20 February 2026 05:12:14 +0000 (0:00:01.114) 0:16:21.723 ******* 2026-02-20 05:12:34.565176 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565185 | orchestrator | 2026-02-20 05:12:34.565195 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:12:34.565205 | orchestrator | Friday 20 February 2026 05:12:15 +0000 (0:00:01.146) 0:16:22.869 ******* 2026-02-20 05:12:34.565214 | orchestrator | skipping: [testbed-node-2] 2026-02-20 
05:12:34.565224 | orchestrator | 2026-02-20 05:12:34.565234 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:12:34.565243 | orchestrator | Friday 20 February 2026 05:12:16 +0000 (0:00:01.125) 0:16:23.995 ******* 2026-02-20 05:12:34.565253 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565263 | orchestrator | 2026-02-20 05:12:34.565273 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:12:34.565282 | orchestrator | Friday 20 February 2026 05:12:17 +0000 (0:00:01.154) 0:16:25.150 ******* 2026-02-20 05:12:34.565292 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565302 | orchestrator | 2026-02-20 05:12:34.565311 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:12:34.565321 | orchestrator | Friday 20 February 2026 05:12:18 +0000 (0:00:01.114) 0:16:26.265 ******* 2026-02-20 05:12:34.565330 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565340 | orchestrator | 2026-02-20 05:12:34.565350 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:12:34.565359 | orchestrator | Friday 20 February 2026 05:12:19 +0000 (0:00:00.791) 0:16:27.057 ******* 2026-02-20 05:12:34.565369 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.565379 | orchestrator | 2026-02-20 05:12:34.565396 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:12:34.565406 | orchestrator | Friday 20 February 2026 05:12:21 +0000 (0:00:02.423) 0:16:29.481 ******* 2026-02-20 05:12:34.565416 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.565426 | orchestrator | 2026-02-20 05:12:34.565436 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:12:34.565445 | orchestrator | Friday 20 February 
2026 05:12:22 +0000 (0:00:00.778) 0:16:30.260 ******* 2026-02-20 05:12:34.565455 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-20 05:12:34.565474 | orchestrator | 2026-02-20 05:12:34.565485 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:12:34.565551 | orchestrator | Friday 20 February 2026 05:12:23 +0000 (0:00:01.104) 0:16:31.365 ******* 2026-02-20 05:12:34.565568 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565585 | orchestrator | 2026-02-20 05:12:34.565600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:12:34.565616 | orchestrator | Friday 20 February 2026 05:12:25 +0000 (0:00:01.121) 0:16:32.486 ******* 2026-02-20 05:12:34.565632 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565647 | orchestrator | 2026-02-20 05:12:34.565661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:12:34.565676 | orchestrator | Friday 20 February 2026 05:12:26 +0000 (0:00:01.052) 0:16:33.538 ******* 2026-02-20 05:12:34.565692 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565706 | orchestrator | 2026-02-20 05:12:34.565720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:12:34.565735 | orchestrator | Friday 20 February 2026 05:12:27 +0000 (0:00:01.090) 0:16:34.628 ******* 2026-02-20 05:12:34.565749 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565763 | orchestrator | 2026-02-20 05:12:34.565778 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:12:34.565793 | orchestrator | Friday 20 February 2026 05:12:28 +0000 (0:00:01.103) 0:16:35.732 ******* 2026-02-20 05:12:34.565810 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565826 | 
orchestrator | 2026-02-20 05:12:34.565842 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:12:34.565857 | orchestrator | Friday 20 February 2026 05:12:29 +0000 (0:00:01.110) 0:16:36.842 ******* 2026-02-20 05:12:34.565872 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565888 | orchestrator | 2026-02-20 05:12:34.565905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:12:34.565920 | orchestrator | Friday 20 February 2026 05:12:30 +0000 (0:00:01.083) 0:16:37.925 ******* 2026-02-20 05:12:34.565936 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.565951 | orchestrator | 2026-02-20 05:12:34.565968 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:12:34.565984 | orchestrator | Friday 20 February 2026 05:12:31 +0000 (0:00:01.108) 0:16:39.034 ******* 2026-02-20 05:12:34.565999 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:12:34.566095 | orchestrator | 2026-02-20 05:12:34.566123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:12:34.566141 | orchestrator | Friday 20 February 2026 05:12:32 +0000 (0:00:01.098) 0:16:40.133 ******* 2026-02-20 05:12:34.566157 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:12:34.566173 | orchestrator | 2026-02-20 05:12:34.566190 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:12:34.566254 | orchestrator | Friday 20 February 2026 05:12:33 +0000 (0:00:00.765) 0:16:40.898 ******* 2026-02-20 05:12:34.566271 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-20 05:12:34.566289 | orchestrator | 2026-02-20 05:12:34.566326 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 
05:13:10.963143 | orchestrator | Friday 20 February 2026 05:12:34 +0000 (0:00:01.135) 0:16:42.034 ******* 2026-02-20 05:13:10.963223 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-20 05:13:10.963230 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-20 05:13:10.963235 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-20 05:13:10.963239 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-20 05:13:10.963244 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-20 05:13:10.963248 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-20 05:13:10.963252 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-20 05:13:10.963273 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:13:10.963278 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:13:10.963282 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:13:10.963286 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:13:10.963293 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:13:10.963299 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:13:10.963306 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:13:10.963312 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-20 05:13:10.963318 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-20 05:13:10.963324 | orchestrator | 2026-02-20 05:13:10.963331 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:13:10.963338 | orchestrator | Friday 20 February 2026 05:12:41 +0000 (0:00:06.908) 0:16:48.943 ******* 2026-02-20 05:13:10.963344 | orchestrator | skipping: 
[testbed-node-2] 2026-02-20 05:13:10.963351 | orchestrator | 2026-02-20 05:13:10.963357 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:13:10.963364 | orchestrator | Friday 20 February 2026 05:12:42 +0000 (0:00:00.763) 0:16:49.706 ******* 2026-02-20 05:13:10.963371 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963377 | orchestrator | 2026-02-20 05:13:10.963398 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:13:10.963405 | orchestrator | Friday 20 February 2026 05:12:42 +0000 (0:00:00.757) 0:16:50.464 ******* 2026-02-20 05:13:10.963412 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963418 | orchestrator | 2026-02-20 05:13:10.963424 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:13:10.963429 | orchestrator | Friday 20 February 2026 05:12:43 +0000 (0:00:00.764) 0:16:51.228 ******* 2026-02-20 05:13:10.963435 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963442 | orchestrator | 2026-02-20 05:13:10.963449 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:13:10.963455 | orchestrator | Friday 20 February 2026 05:12:44 +0000 (0:00:00.754) 0:16:51.983 ******* 2026-02-20 05:13:10.963461 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963468 | orchestrator | 2026-02-20 05:13:10.963475 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:13:10.963482 | orchestrator | Friday 20 February 2026 05:12:45 +0000 (0:00:00.756) 0:16:52.739 ******* 2026-02-20 05:13:10.963487 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963490 | orchestrator | 2026-02-20 05:13:10.963494 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-02-20 05:13:10.963499 | orchestrator | Friday 20 February 2026 05:12:46 +0000 (0:00:00.766) 0:16:53.506 ******* 2026-02-20 05:13:10.963503 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963507 | orchestrator | 2026-02-20 05:13:10.963511 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:13:10.963515 | orchestrator | Friday 20 February 2026 05:12:46 +0000 (0:00:00.762) 0:16:54.269 ******* 2026-02-20 05:13:10.963519 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963522 | orchestrator | 2026-02-20 05:13:10.963561 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:13:10.963565 | orchestrator | Friday 20 February 2026 05:12:47 +0000 (0:00:00.761) 0:16:55.031 ******* 2026-02-20 05:13:10.963569 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963573 | orchestrator | 2026-02-20 05:13:10.963577 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:13:10.963581 | orchestrator | Friday 20 February 2026 05:12:48 +0000 (0:00:00.767) 0:16:55.798 ******* 2026-02-20 05:13:10.963590 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963594 | orchestrator | 2026-02-20 05:13:10.963598 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:13:10.963602 | orchestrator | Friday 20 February 2026 05:12:49 +0000 (0:00:00.769) 0:16:56.568 ******* 2026-02-20 05:13:10.963606 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:13:10.963610 | orchestrator | 2026-02-20 05:13:10.963613 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:13:10.963617 | orchestrator | Friday 20 February 2026 05:12:49 +0000 (0:00:00.759) 0:16:57.327 ******* 2026-02-20 
05:13:10.963621 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963625 | orchestrator |
2026-02-20 05:13:10.963629 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:13:10.963632 | orchestrator | Friday 20 February 2026 05:12:50 +0000 (0:00:00.846) 0:16:58.174 *******
2026-02-20 05:13:10.963636 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963640 | orchestrator |
2026-02-20 05:13:10.963644 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:13:10.963648 | orchestrator | Friday 20 February 2026 05:12:51 +0000 (0:00:00.835) 0:16:59.010 *******
2026-02-20 05:13:10.963652 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963655 | orchestrator |
2026-02-20 05:13:10.963659 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-20 05:13:10.963675 | orchestrator | Friday 20 February 2026 05:12:52 +0000 (0:00:00.785) 0:16:59.796 *******
2026-02-20 05:13:10.963679 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963683 | orchestrator |
2026-02-20 05:13:10.963687 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-20 05:13:10.963691 | orchestrator | Friday 20 February 2026 05:12:53 +0000 (0:00:00.864) 0:17:00.660 *******
2026-02-20 05:13:10.963695 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963699 | orchestrator |
2026-02-20 05:13:10.963702 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-20 05:13:10.963706 | orchestrator | Friday 20 February 2026 05:12:53 +0000 (0:00:00.760) 0:17:01.421 *******
2026-02-20 05:13:10.963710 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963714 | orchestrator |
2026-02-20 05:13:10.963719 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:13:10.963724 | orchestrator | Friday 20 February 2026 05:12:54 +0000 (0:00:00.782) 0:17:02.203 *******
2026-02-20 05:13:10.963729 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963733 | orchestrator |
2026-02-20 05:13:10.963738 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:13:10.963742 | orchestrator | Friday 20 February 2026 05:12:55 +0000 (0:00:00.834) 0:17:03.038 *******
2026-02-20 05:13:10.963747 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963751 | orchestrator |
2026-02-20 05:13:10.963756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:13:10.963760 | orchestrator | Friday 20 February 2026 05:12:56 +0000 (0:00:00.772) 0:17:03.810 *******
2026-02-20 05:13:10.963764 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963769 | orchestrator |
2026-02-20 05:13:10.963773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:13:10.963779 | orchestrator | Friday 20 February 2026 05:12:57 +0000 (0:00:00.769) 0:17:04.580 *******
2026-02-20 05:13:10.963785 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963792 | orchestrator |
2026-02-20 05:13:10.963798 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:13:10.963809 | orchestrator | Friday 20 February 2026 05:12:57 +0000 (0:00:00.802) 0:17:05.382 *******
2026-02-20 05:13:10.963816 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:13:10.963823 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:13:10.963830 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:13:10.963842 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963849 | orchestrator |
2026-02-20 05:13:10.963856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:13:10.963864 | orchestrator | Friday 20 February 2026 05:12:58 +0000 (0:00:01.095) 0:17:06.478 *******
2026-02-20 05:13:10.963869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:13:10.963873 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:13:10.963878 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:13:10.963882 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963887 | orchestrator |
2026-02-20 05:13:10.963891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:13:10.963896 | orchestrator | Friday 20 February 2026 05:13:00 +0000 (0:00:01.097) 0:17:07.576 *******
2026-02-20 05:13:10.963900 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:13:10.963904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:13:10.963909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:13:10.963913 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963918 | orchestrator |
2026-02-20 05:13:10.963922 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:13:10.963927 | orchestrator | Friday 20 February 2026 05:13:01 +0000 (0:00:01.036) 0:17:08.612 *******
2026-02-20 05:13:10.963931 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963935 | orchestrator |
2026-02-20 05:13:10.963940 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:13:10.963944 | orchestrator | Friday 20 February 2026 05:13:01 +0000 (0:00:00.753) 0:17:09.366 *******
2026-02-20 05:13:10.963949 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-20 05:13:10.963953 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.963958 | orchestrator |
2026-02-20 05:13:10.963962 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-20 05:13:10.963967 | orchestrator | Friday 20 February 2026 05:13:02 +0000 (0:00:00.888) 0:17:10.255 *******
2026-02-20 05:13:10.963971 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:13:10.963976 | orchestrator |
2026-02-20 05:13:10.963980 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-20 05:13:10.963985 | orchestrator | Friday 20 February 2026 05:13:04 +0000 (0:00:01.450) 0:17:11.705 *******
2026-02-20 05:13:10.963989 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:13:10.963994 | orchestrator |
2026-02-20 05:13:10.963998 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-20 05:13:10.964003 | orchestrator | Friday 20 February 2026 05:13:05 +0000 (0:00:00.855) 0:17:12.561 *******
2026-02-20 05:13:10.964007 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-02-20 05:13:10.964012 | orchestrator |
2026-02-20 05:13:10.964017 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-20 05:13:10.964021 | orchestrator | Friday 20 February 2026 05:13:06 +0000 (0:00:01.135) 0:17:13.696 *******
2026-02-20 05:13:10.964025 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:13:10.964030 | orchestrator |
2026-02-20 05:13:10.964034 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-20 05:13:10.964038 | orchestrator | Friday 20 February 2026 05:13:09 +0000 (0:00:03.571) 0:17:17.268 *******
2026-02-20 05:13:10.964043 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:13:10.964047 | orchestrator |
2026-02-20 05:13:10.964055 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-20 05:14:27.994648 | orchestrator | Friday 20 February 2026 05:13:10 +0000 (0:00:01.164) 0:17:18.433 *******
2026-02-20 05:14:27.994786 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.994812 | orchestrator |
2026-02-20 05:14:27.994829 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-20 05:14:27.994876 | orchestrator | Friday 20 February 2026 05:13:12 +0000 (0:00:01.139) 0:17:19.572 *******
2026-02-20 05:14:27.994913 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.994932 | orchestrator |
2026-02-20 05:14:27.994941 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-20 05:14:27.994950 | orchestrator | Friday 20 February 2026 05:13:13 +0000 (0:00:01.140) 0:17:20.714 *******
2026-02-20 05:14:27.994959 | orchestrator | changed: [testbed-node-2]
2026-02-20 05:14:27.994969 | orchestrator |
2026-02-20 05:14:27.994978 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-20 05:14:27.994987 | orchestrator | Friday 20 February 2026 05:13:15 +0000 (0:00:02.078) 0:17:22.792 *******
2026-02-20 05:14:27.994995 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995004 | orchestrator |
2026-02-20 05:14:27.995013 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-20 05:14:27.995021 | orchestrator | Friday 20 February 2026 05:13:16 +0000 (0:00:01.616) 0:17:24.408 *******
2026-02-20 05:14:27.995030 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995038 | orchestrator |
2026-02-20 05:14:27.995047 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-20 05:14:27.995056 | orchestrator | Friday 20 February 2026 05:13:18 +0000 (0:00:01.493) 0:17:25.902 *******
2026-02-20 05:14:27.995064 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995073 | orchestrator |
2026-02-20 05:14:27.995081 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-20 05:14:27.995090 | orchestrator | Friday 20 February 2026 05:13:19 +0000 (0:00:01.516) 0:17:27.419 *******
2026-02-20 05:14:27.995098 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:14:27.995109 | orchestrator |
2026-02-20 05:14:27.995119 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-20 05:14:27.995129 | orchestrator | Friday 20 February 2026 05:13:21 +0000 (0:00:01.631) 0:17:29.050 *******
2026-02-20 05:14:27.995154 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:14:27.995164 | orchestrator |
2026-02-20 05:14:27.995174 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-20 05:14:27.995184 | orchestrator | Friday 20 February 2026 05:13:23 +0000 (0:00:01.538) 0:17:30.589 *******
2026-02-20 05:14:27.995195 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 05:14:27.995204 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-20 05:14:27.995214 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-20 05:14:27.995224 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-20 05:14:27.995234 | orchestrator |
2026-02-20 05:14:27.995244 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-20 05:14:27.995255 | orchestrator | Friday 20 February 2026 05:13:26 +0000 (0:00:03.848) 0:17:34.437 *******
2026-02-20 05:14:27.995265 | orchestrator | changed: [testbed-node-2]
2026-02-20 05:14:27.995275 | orchestrator |
2026-02-20 05:14:27.995285 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-20 05:14:27.995295 | orchestrator | Friday 20 February 2026 05:13:29 +0000 (0:00:02.118) 0:17:36.555 *******
2026-02-20 05:14:27.995305 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995315 | orchestrator |
2026-02-20 05:14:27.995325 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-20 05:14:27.995335 | orchestrator | Friday 20 February 2026 05:13:30 +0000 (0:00:01.115) 0:17:37.671 *******
2026-02-20 05:14:27.995345 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995356 | orchestrator |
2026-02-20 05:14:27.995365 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-20 05:14:27.995373 | orchestrator | Friday 20 February 2026 05:13:31 +0000 (0:00:01.126) 0:17:38.797 *******
2026-02-20 05:14:27.995382 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995391 | orchestrator |
2026-02-20 05:14:27.995399 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-20 05:14:27.995415 | orchestrator | Friday 20 February 2026 05:13:33 +0000 (0:00:01.750) 0:17:40.548 *******
2026-02-20 05:14:27.995424 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995433 | orchestrator |
2026-02-20 05:14:27.995441 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-20 05:14:27.995450 | orchestrator | Friday 20 February 2026 05:13:34 +0000 (0:00:01.489) 0:17:42.037 *******
2026-02-20 05:14:27.995458 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:14:27.995467 | orchestrator |
2026-02-20 05:14:27.995476 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-20 05:14:27.995484 | orchestrator | Friday 20 February 2026 05:13:35 +0000 (0:00:00.765) 0:17:42.803 *******
2026-02-20 05:14:27.995493 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-02-20 05:14:27.995502 | orchestrator |
2026-02-20 05:14:27.995511 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-20 05:14:27.995520 | orchestrator | Friday 20 February 2026 05:13:36 +0000 (0:00:01.084) 0:17:43.887 *******
2026-02-20 05:14:27.995528 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:14:27.995537 | orchestrator |
2026-02-20 05:14:27.995546 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-20 05:14:27.995554 | orchestrator | Friday 20 February 2026 05:13:37 +0000 (0:00:01.109) 0:17:44.997 *******
2026-02-20 05:14:27.995563 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:14:27.995571 | orchestrator |
2026-02-20 05:14:27.995612 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-20 05:14:27.995623 | orchestrator | Friday 20 February 2026 05:13:38 +0000 (0:00:01.102) 0:17:46.099 *******
2026-02-20 05:14:27.995632 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-02-20 05:14:27.995641 | orchestrator |
2026-02-20 05:14:27.995667 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-20 05:14:27.995677 | orchestrator | Friday 20 February 2026 05:13:39 +0000 (0:00:01.044) 0:17:47.144 *******
2026-02-20 05:14:27.995685 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995694 | orchestrator |
2026-02-20 05:14:27.995703 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-20 05:14:27.995711 | orchestrator | Friday 20 February 2026 05:13:41 +0000 (0:00:02.283) 0:17:49.428 *******
2026-02-20 05:14:27.995720 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995729 | orchestrator |
2026-02-20 05:14:27.995738 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-20 05:14:27.995747 | orchestrator | Friday 20 February 2026 05:13:43 +0000 (0:00:02.528) 0:17:51.422 *******
2026-02-20 05:14:27.995755 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995764 | orchestrator |
2026-02-20 05:14:27.995773 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-20 05:14:27.995781 | orchestrator | Friday 20 February 2026 05:13:46 +0000 (0:00:02.528) 0:17:53.951 *******
2026-02-20 05:14:27.995790 | orchestrator | changed: [testbed-node-2]
2026-02-20 05:14:27.995798 | orchestrator |
2026-02-20 05:14:27.995807 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-20 05:14:27.995816 | orchestrator | Friday 20 February 2026 05:13:49 +0000 (0:00:03.161) 0:17:57.113 *******
2026-02-20 05:14:27.995824 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-02-20 05:14:27.995833 | orchestrator |
2026-02-20 05:14:27.995842 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-20 05:14:27.995850 | orchestrator | Friday 20 February 2026 05:13:50 +0000 (0:00:01.123) 0:17:58.236 *******
2026-02-20 05:14:27.995859 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-20 05:14:27.995867 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995876 | orchestrator |
2026-02-20 05:14:27.995885 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-20 05:14:27.995905 | orchestrator | Friday 20 February 2026 05:14:13 +0000 (0:00:23.030) 0:18:21.267 *******
2026-02-20 05:14:27.995914 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:14:27.995923 | orchestrator |
2026-02-20 05:14:27.995937 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-20 05:14:27.995945 | orchestrator | Friday 20 February 2026 05:14:16 +0000 (0:00:02.850) 0:18:24.117 *******
2026-02-20 05:14:27.995954 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:14:27.995963 | orchestrator |
2026-02-20 05:14:27.995971 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-20 05:14:27.995980 | orchestrator | Friday 20 February 2026 05:14:17 +0000 (0:00:00.758) 0:18:24.875 *******
2026-02-20 05:14:27.995991 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-20 05:14:27.996003 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-20 05:14:27.996012 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-20 05:14:27.996021 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-20 05:14:27.996033 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-20 05:14:27.996057 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0c15de8ee6b5d9a3337136109365e953b5e5cc2a'}])
2026-02-20 05:15:13.854076 | orchestrator |
2026-02-20 05:15:13.854198 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-02-20 05:15:13.854219 | orchestrator | Friday 20 February 2026 05:14:27 +0000 (0:00:10.588) 0:18:35.464 *******
2026-02-20 05:15:13.854234 | orchestrator | changed: [testbed-node-2]
2026-02-20 05:15:13.854249 | orchestrator |
2026-02-20 05:15:13.854262 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:15:13.854275 | orchestrator | Friday 20 February 2026 05:14:30 +0000 (0:00:02.278) 0:18:37.743 *******
2026-02-20 05:15:13.854289 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:15:13.854304 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-02-20 05:15:13.854317 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-02-20 05:15:13.854360 | orchestrator |
2026-02-20 05:15:13.854374 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:15:13.854388 | orchestrator | Friday 20 February 2026 05:14:32 +0000 (0:00:01.812) 0:18:39.555 *******
2026-02-20 05:15:13.854401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-20 05:15:13.854415 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-20 05:15:13.854428 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:15:13.854441 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:15:13.854456 | orchestrator |
2026-02-20 05:15:13.854468 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-02-20 05:15:13.854482 | orchestrator | Friday 20 February 2026 05:14:33 +0000 (0:00:01.017) 0:18:40.573 *******
2026-02-20 05:15:13.854495 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:15:13.854507 | orchestrator |
2026-02-20 05:15:13.854520 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-02-20 05:15:13.854533 | orchestrator | Friday 20 February 2026 05:14:33 +0000 (0:00:00.783) 0:18:41.356 *******
2026-02-20 05:15:13.854562 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:15:13.854577 | orchestrator |
2026-02-20 05:15:13.854591 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-02-20 05:15:13.854605 | orchestrator |
2026-02-20 05:15:13.854647 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-02-20 05:15:13.854661 | orchestrator | Friday 20 February 2026 05:14:37 +0000 (0:00:03.344) 0:18:44.701 *******
2026-02-20 05:15:13.854675 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:15:13.854690 | orchestrator | ok: [testbed-node-1]
2026-02-20 05:15:13.854704 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:15:13.854718 | orchestrator |
2026-02-20 05:15:13.854732 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-20 05:15:13.854742 | orchestrator |
2026-02-20 05:15:13.854751 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-20 05:15:13.854760 | orchestrator | Friday 20 February 2026 05:14:38 +0000 (0:00:01.560) 0:18:46.261 *******
2026-02-20 05:15:13.854769 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854779 | orchestrator |
2026-02-20 05:15:13.854788 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 05:15:13.854797 | orchestrator | Friday 20 February 2026 05:14:39 +0000 (0:00:01.126) 0:18:47.388 *******
2026-02-20 05:15:13.854807 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854816 | orchestrator |
2026-02-20 05:15:13.854825 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 05:15:13.854834 | orchestrator | Friday 20 February 2026 05:14:41 +0000 (0:00:01.180) 0:18:48.569 *******
2026-02-20 05:15:13.854843 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854853 | orchestrator |
2026-02-20 05:15:13.854862 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:15:13.854871 | orchestrator | Friday 20 February 2026 05:14:42 +0000 (0:00:01.112) 0:18:49.682 *******
2026-02-20 05:15:13.854881 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854890 | orchestrator |
2026-02-20 05:15:13.854899 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:15:13.854908 | orchestrator | Friday 20 February 2026 05:14:43 +0000 (0:00:01.140) 0:18:50.822 *******
2026-02-20 05:15:13.854918 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854928 | orchestrator |
2026-02-20 05:15:13.854937 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:15:13.854945 | orchestrator | Friday 20 February 2026 05:14:44 +0000 (0:00:01.113) 0:18:51.936 *******
2026-02-20 05:15:13.854953 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.854961 | orchestrator |
2026-02-20 05:15:13.854969 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:15:13.854977 | orchestrator | Friday 20 February 2026 05:14:45 +0000 (0:00:01.170) 0:18:53.107 *******
2026-02-20 05:15:13.854997 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855005 | orchestrator |
2026-02-20 05:15:13.855013 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:15:13.855021 | orchestrator | Friday 20 February 2026 05:14:46 +0000 (0:00:01.106) 0:18:54.213 *******
2026-02-20 05:15:13.855029 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855037 | orchestrator |
2026-02-20 05:15:13.855045 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:15:13.855053 | orchestrator | Friday 20 February 2026 05:14:47 +0000 (0:00:01.156) 0:18:55.370 *******
2026-02-20 05:15:13.855060 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855068 | orchestrator |
2026-02-20 05:15:13.855076 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:15:13.855084 | orchestrator | Friday 20 February 2026 05:14:48 +0000 (0:00:01.106) 0:18:56.476 *******
2026-02-20 05:15:13.855092 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855100 | orchestrator |
2026-02-20 05:15:13.855108 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:15:13.855115 | orchestrator | Friday 20 February 2026 05:14:50 +0000 (0:00:01.142) 0:18:57.619 *******
2026-02-20 05:15:13.855123 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855131 | orchestrator |
2026-02-20 05:15:13.855164 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:15:13.855178 | orchestrator | Friday 20 February 2026 05:14:51 +0000 (0:00:01.116) 0:18:58.736 *******
2026-02-20 05:15:13.855190 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855200 | orchestrator |
2026-02-20 05:15:13.855212 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:15:13.855224 | orchestrator | Friday 20 February 2026 05:14:52 +0000 (0:00:01.132) 0:18:59.868 *******
2026-02-20 05:15:13.855234 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855247 | orchestrator |
2026-02-20 05:15:13.855259 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:15:13.855270 | orchestrator | Friday 20 February 2026 05:14:53 +0000 (0:00:01.128) 0:19:00.997 *******
2026-02-20 05:15:13.855282 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855294 | orchestrator |
2026-02-20 05:15:13.855306 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:15:13.855318 | orchestrator | Friday 20 February 2026 05:14:54 +0000 (0:00:01.121) 0:19:02.119 *******
2026-02-20 05:15:13.855330 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855342 | orchestrator |
2026-02-20 05:15:13.855355 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:15:13.855367 | orchestrator | Friday 20 February 2026 05:14:55 +0000 (0:00:01.118) 0:19:03.238 *******
2026-02-20 05:15:13.855378 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855392 | orchestrator |
2026-02-20 05:15:13.855407 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:15:13.855420 | orchestrator | Friday 20 February 2026 05:14:56 +0000 (0:00:01.101) 0:19:04.339 *******
2026-02-20 05:15:13.855434 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855445 | orchestrator |
2026-02-20 05:15:13.855453 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:15:13.855461 | orchestrator | Friday 20 February 2026 05:14:57 +0000 (0:00:01.126) 0:19:05.465 *******
2026-02-20 05:15:13.855468 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855476 | orchestrator |
2026-02-20 05:15:13.855492 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:15:13.855500 | orchestrator | Friday 20 February 2026 05:14:59 +0000 (0:00:01.121) 0:19:06.587 *******
2026-02-20 05:15:13.855508 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855516 | orchestrator |
2026-02-20 05:15:13.855524 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:15:13.855540 | orchestrator | Friday 20 February 2026 05:15:00 +0000 (0:00:01.198) 0:19:07.786 *******
2026-02-20 05:15:13.855548 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855556 | orchestrator |
2026-02-20 05:15:13.855564 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:15:13.855572 | orchestrator | Friday 20 February 2026 05:15:01 +0000 (0:00:01.145) 0:19:08.931 *******
2026-02-20 05:15:13.855579 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855587 | orchestrator |
2026-02-20 05:15:13.855595 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:15:13.855603 | orchestrator | Friday 20 February 2026 05:15:02 +0000 (0:00:01.154) 0:19:10.086 *******
2026-02-20 05:15:13.855610 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855655 | orchestrator |
2026-02-20 05:15:13.855664 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:15:13.855671 | orchestrator | Friday 20 February 2026 05:15:03 +0000 (0:00:01.116) 0:19:11.202 *******
2026-02-20 05:15:13.855679 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855687 | orchestrator |
2026-02-20 05:15:13.855695 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 05:15:13.855703 | orchestrator | Friday 20 February 2026 05:15:04 +0000 (0:00:01.165) 0:19:12.368 *******
2026-02-20 05:15:13.855711 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855719 | orchestrator |
2026-02-20 05:15:13.855727 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 05:15:13.855734 | orchestrator | Friday 20 February 2026 05:15:06 +0000 (0:00:01.165) 0:19:13.534 *******
2026-02-20 05:15:13.855742 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855750 | orchestrator |
2026-02-20 05:15:13.855758 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 05:15:13.855766 | orchestrator | Friday 20 February 2026 05:15:07 +0000 (0:00:01.105) 0:19:14.639 *******
2026-02-20 05:15:13.855774 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855782 | orchestrator |
2026-02-20 05:15:13.855789 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 05:15:13.855797 | orchestrator | Friday 20 February 2026 05:15:08 +0000 (0:00:01.131) 0:19:15.771 *******
2026-02-20 05:15:13.855805 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855813 | orchestrator |
2026-02-20 05:15:13.855821 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 05:15:13.855829 | orchestrator | Friday 20 February 2026 05:15:09 +0000 (0:00:01.107) 0:19:16.878 *******
2026-02-20 05:15:13.855836 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855844 | orchestrator |
2026-02-20 05:15:13.855852 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 05:15:13.855860 | orchestrator | Friday 20 February 2026 05:15:10 +0000 (0:00:01.134) 0:19:18.012 *******
2026-02-20 05:15:13.855868 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855875 | orchestrator |
2026-02-20 05:15:13.855883 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 05:15:13.855891 | orchestrator | Friday 20 February 2026 05:15:11 +0000 (0:00:01.121) 0:19:19.133 *******
2026-02-20 05:15:13.855899 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855907 | orchestrator |
2026-02-20 05:15:13.855914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 05:15:13.855922 | orchestrator | Friday 20 February 2026 05:15:12 +0000 (0:00:01.087) 0:19:20.220 *******
2026-02-20 05:15:13.855930 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:13.855938 | orchestrator |
2026-02-20 05:15:13.855948 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 05:15:13.855973 | orchestrator | Friday 20 February 2026 05:15:13 +0000 (0:00:01.105) 0:19:21.326 *******
2026-02-20 05:15:54.281720 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.281826 | orchestrator |
2026-02-20 05:15:54.281839 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 05:15:54.281874 | orchestrator | Friday 20 February 2026 05:15:14 +0000 (0:00:01.122) 0:19:22.448 *******
2026-02-20 05:15:54.281884 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.281898 | orchestrator |
2026-02-20 05:15:54.281913 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 05:15:54.281927 | orchestrator | Friday 20 February 2026 05:15:16 +0000 (0:00:01.099) 0:19:23.548 *******
2026-02-20 05:15:54.281942 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.281956 | orchestrator |
2026-02-20 05:15:54.281971 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 05:15:54.281985 | orchestrator | Friday 20 February 2026 05:15:17 +0000 (0:00:01.124) 0:19:24.672 *******
2026-02-20 05:15:54.281998 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282012 | orchestrator |
2026-02-20 05:15:54.282099 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 05:15:54.282115 | orchestrator | Friday 20 February 2026 05:15:18 +0000 (0:00:01.107) 0:19:25.780 *******
2026-02-20 05:15:54.282130 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282145 | orchestrator |
2026-02-20 05:15:54.282160 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 05:15:54.282175 | orchestrator | Friday 20 February 2026 05:15:19 +0000 (0:00:01.102) 0:19:26.882 *******
2026-02-20 05:15:54.282190 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282205 | orchestrator |
2026-02-20 05:15:54.282217 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 05:15:54.282227 | orchestrator | Friday 20 February 2026 05:15:20 +0000 (0:00:01.128) 0:19:28.010 *******
2026-02-20 05:15:54.282237 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282247 | orchestrator |
2026-02-20 05:15:54.282258 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 05:15:54.282281 | orchestrator | Friday 20 February 2026 05:15:21 +0000 (0:00:01.118) 0:19:29.129 *******
2026-02-20 05:15:54.282292 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282303 | orchestrator |
2026-02-20 05:15:54.282313 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 05:15:54.282324 | orchestrator | Friday 20 February 2026 05:15:22 +0000 (0:00:01.131) 0:19:30.261 *******
2026-02-20 05:15:54.282334 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282344 | orchestrator |
2026-02-20 05:15:54.282355 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 05:15:54.282365 | orchestrator | Friday 20 February 2026 05:15:23 +0000 (0:00:01.183) 0:19:31.444 *******
2026-02-20 05:15:54.282375 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282385 | orchestrator |
2026-02-20 05:15:54.282396 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 05:15:54.282406 | orchestrator | Friday 20 February 2026 05:15:25 +0000 (0:00:01.151) 0:19:32.596 *******
2026-02-20 05:15:54.282416 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282426 | orchestrator |
2026-02-20 05:15:54.282437 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 05:15:54.282448 | orchestrator | Friday 20 February 2026 05:15:26 +0000 (0:00:01.165) 0:19:33.761 *******
2026-02-20 05:15:54.282463 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282477 | orchestrator |
2026-02-20 05:15:54.282489 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 05:15:54.282512 | orchestrator | Friday 20 February 2026 05:15:27 +0000 (0:00:01.104) 0:19:34.865 *******
2026-02-20 05:15:54.282528 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282542 | orchestrator |
2026-02-20 05:15:54.282556 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 05:15:54.282570 | orchestrator | Friday 20 February 2026 05:15:28 +0000 (0:00:01.098) 0:19:35.964 *******
2026-02-20 05:15:54.282584 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282612 | orchestrator |
2026-02-20 05:15:54.282628 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:15:54.282698 | orchestrator | Friday 20 February 2026 05:15:29 +0000 (0:00:01.028) 0:19:36.993 *******
2026-02-20 05:15:54.282713 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282727 | orchestrator |
2026-02-20 05:15:54.282741 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:15:54.282755 | orchestrator | Friday 20 February 2026 05:15:30 +0000 (0:00:00.989) 0:19:37.982 *******
2026-02-20 05:15:54.282770 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:15:54.282786 | orchestrator |
2026-02-20
05:15:54.282800 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:15:54.282815 | orchestrator | Friday 20 February 2026 05:15:31 +0000 (0:00:01.078) 0:19:39.060 ******* 2026-02-20 05:15:54.282829 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.282844 | orchestrator | 2026-02-20 05:15:54.282859 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:15:54.282874 | orchestrator | Friday 20 February 2026 05:15:32 +0000 (0:00:01.171) 0:19:40.232 ******* 2026-02-20 05:15:54.282889 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.282903 | orchestrator | 2026-02-20 05:15:54.282918 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:15:54.282933 | orchestrator | Friday 20 February 2026 05:15:33 +0000 (0:00:01.088) 0:19:41.320 ******* 2026-02-20 05:15:54.282947 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.282962 | orchestrator | 2026-02-20 05:15:54.282977 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:15:54.282994 | orchestrator | Friday 20 February 2026 05:15:34 +0000 (0:00:01.077) 0:19:42.398 ******* 2026-02-20 05:15:54.283008 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283023 | orchestrator | 2026-02-20 05:15:54.283037 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:15:54.283076 | orchestrator | Friday 20 February 2026 05:15:35 +0000 (0:00:01.076) 0:19:43.474 ******* 2026-02-20 05:15:54.283092 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283107 | orchestrator | 2026-02-20 05:15:54.283122 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:15:54.283136 | orchestrator | 
Friday 20 February 2026 05:15:37 +0000 (0:00:01.096) 0:19:44.571 ******* 2026-02-20 05:15:54.283151 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283166 | orchestrator | 2026-02-20 05:15:54.283181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:15:54.283196 | orchestrator | Friday 20 February 2026 05:15:38 +0000 (0:00:01.107) 0:19:45.679 ******* 2026-02-20 05:15:54.283211 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283225 | orchestrator | 2026-02-20 05:15:54.283240 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:15:54.283255 | orchestrator | Friday 20 February 2026 05:15:39 +0000 (0:00:01.111) 0:19:46.790 ******* 2026-02-20 05:15:54.283270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:15:54.283285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:15:54.283300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:15:54.283314 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283328 | orchestrator | 2026-02-20 05:15:54.283344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:15:54.283359 | orchestrator | Friday 20 February 2026 05:15:41 +0000 (0:00:01.703) 0:19:48.494 ******* 2026-02-20 05:15:54.283374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:15:54.283389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:15:54.283403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:15:54.283417 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283443 | orchestrator | 2026-02-20 05:15:54.283465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:15:54.283480 | 
orchestrator | Friday 20 February 2026 05:15:42 +0000 (0:00:01.660) 0:19:50.155 ******* 2026-02-20 05:15:54.283494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:15:54.283509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:15:54.283523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:15:54.283538 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283552 | orchestrator | 2026-02-20 05:15:54.283568 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:15:54.283582 | orchestrator | Friday 20 February 2026 05:15:44 +0000 (0:00:01.378) 0:19:51.534 ******* 2026-02-20 05:15:54.283597 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283612 | orchestrator | 2026-02-20 05:15:54.283627 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:15:54.283670 | orchestrator | Friday 20 February 2026 05:15:45 +0000 (0:00:01.087) 0:19:52.622 ******* 2026-02-20 05:15:54.283682 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-20 05:15:54.283691 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283700 | orchestrator | 2026-02-20 05:15:54.283709 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:15:54.283718 | orchestrator | Friday 20 February 2026 05:15:46 +0000 (0:00:01.212) 0:19:53.834 ******* 2026-02-20 05:15:54.283726 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283735 | orchestrator | 2026-02-20 05:15:54.283744 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:15:54.283753 | orchestrator | Friday 20 February 2026 05:15:47 +0000 (0:00:01.158) 0:19:54.992 ******* 2026-02-20 05:15:54.283761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 
05:15:54.283770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:15:54.283779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:15:54.283788 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283796 | orchestrator | 2026-02-20 05:15:54.283805 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 05:15:54.283814 | orchestrator | Friday 20 February 2026 05:15:48 +0000 (0:00:01.375) 0:19:56.368 ******* 2026-02-20 05:15:54.283823 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283832 | orchestrator | 2026-02-20 05:15:54.283840 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 05:15:54.283849 | orchestrator | Friday 20 February 2026 05:15:49 +0000 (0:00:01.097) 0:19:57.466 ******* 2026-02-20 05:15:54.283858 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283867 | orchestrator | 2026-02-20 05:15:54.283876 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-20 05:15:54.283884 | orchestrator | Friday 20 February 2026 05:15:51 +0000 (0:00:01.044) 0:19:58.510 ******* 2026-02-20 05:15:54.283893 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283902 | orchestrator | 2026-02-20 05:15:54.283910 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 05:15:54.283919 | orchestrator | Friday 20 February 2026 05:15:51 +0000 (0:00:00.914) 0:19:59.425 ******* 2026-02-20 05:15:54.283928 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:15:54.283937 | orchestrator | 2026-02-20 05:15:54.283946 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-20 05:15:54.283954 | orchestrator | 2026-02-20 05:15:54.283963 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-02-20 05:15:54.283972 | orchestrator | Friday 20 February 2026 05:15:52 +0000 (0:00:00.941) 0:20:00.367 ******* 2026-02-20 05:15:54.283980 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:15:54.283989 | orchestrator | 2026-02-20 05:15:54.283998 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:15:54.284020 | orchestrator | Friday 20 February 2026 05:15:53 +0000 (0:00:00.633) 0:20:01.000 ******* 2026-02-20 05:15:54.284035 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:15:54.284050 | orchestrator | 2026-02-20 05:15:54.284075 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:16:25.467123 | orchestrator | Friday 20 February 2026 05:15:54 +0000 (0:00:00.751) 0:20:01.752 ******* 2026-02-20 05:16:25.467238 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467252 | orchestrator | 2026-02-20 05:16:25.467261 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:16:25.467269 | orchestrator | Friday 20 February 2026 05:15:55 +0000 (0:00:00.734) 0:20:02.486 ******* 2026-02-20 05:16:25.467276 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467284 | orchestrator | 2026-02-20 05:16:25.467292 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:16:25.467299 | orchestrator | Friday 20 February 2026 05:15:55 +0000 (0:00:00.757) 0:20:03.244 ******* 2026-02-20 05:16:25.467306 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467314 | orchestrator | 2026-02-20 05:16:25.467321 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:16:25.467329 | orchestrator | Friday 20 February 2026 05:15:56 +0000 (0:00:00.783) 0:20:04.027 ******* 2026-02-20 05:16:25.467336 | 
orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467343 | orchestrator | 2026-02-20 05:16:25.467351 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:16:25.467358 | orchestrator | Friday 20 February 2026 05:15:57 +0000 (0:00:00.749) 0:20:04.776 ******* 2026-02-20 05:16:25.467366 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467373 | orchestrator | 2026-02-20 05:16:25.467381 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:16:25.467389 | orchestrator | Friday 20 February 2026 05:15:58 +0000 (0:00:00.743) 0:20:05.519 ******* 2026-02-20 05:16:25.467396 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467404 | orchestrator | 2026-02-20 05:16:25.467417 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:16:25.467427 | orchestrator | Friday 20 February 2026 05:15:58 +0000 (0:00:00.756) 0:20:06.275 ******* 2026-02-20 05:16:25.467446 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467458 | orchestrator | 2026-02-20 05:16:25.467488 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:16:25.467500 | orchestrator | Friday 20 February 2026 05:15:59 +0000 (0:00:00.765) 0:20:07.041 ******* 2026-02-20 05:16:25.467511 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467522 | orchestrator | 2026-02-20 05:16:25.467534 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:16:25.467545 | orchestrator | Friday 20 February 2026 05:16:00 +0000 (0:00:00.779) 0:20:07.821 ******* 2026-02-20 05:16:25.467557 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467570 | orchestrator | 2026-02-20 05:16:25.467583 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-02-20 05:16:25.467596 | orchestrator | Friday 20 February 2026 05:16:01 +0000 (0:00:00.761) 0:20:08.583 ******* 2026-02-20 05:16:25.467608 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467622 | orchestrator | 2026-02-20 05:16:25.467636 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:16:25.467650 | orchestrator | Friday 20 February 2026 05:16:01 +0000 (0:00:00.850) 0:20:09.433 ******* 2026-02-20 05:16:25.467659 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467695 | orchestrator | 2026-02-20 05:16:25.467704 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:16:25.467713 | orchestrator | Friday 20 February 2026 05:16:02 +0000 (0:00:00.774) 0:20:10.207 ******* 2026-02-20 05:16:25.467720 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467728 | orchestrator | 2026-02-20 05:16:25.467735 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:16:25.467766 | orchestrator | Friday 20 February 2026 05:16:03 +0000 (0:00:00.802) 0:20:11.010 ******* 2026-02-20 05:16:25.467774 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467781 | orchestrator | 2026-02-20 05:16:25.467788 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:16:25.467796 | orchestrator | Friday 20 February 2026 05:16:04 +0000 (0:00:00.764) 0:20:11.774 ******* 2026-02-20 05:16:25.467803 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467810 | orchestrator | 2026-02-20 05:16:25.467817 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:16:25.467825 | orchestrator | Friday 20 February 2026 05:16:05 +0000 (0:00:00.768) 0:20:12.543 ******* 2026-02-20 05:16:25.467832 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467839 
| orchestrator | 2026-02-20 05:16:25.467846 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:16:25.467853 | orchestrator | Friday 20 February 2026 05:16:05 +0000 (0:00:00.847) 0:20:13.391 ******* 2026-02-20 05:16:25.467861 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467868 | orchestrator | 2026-02-20 05:16:25.467875 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:16:25.467882 | orchestrator | Friday 20 February 2026 05:16:06 +0000 (0:00:00.755) 0:20:14.146 ******* 2026-02-20 05:16:25.467889 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467897 | orchestrator | 2026-02-20 05:16:25.467904 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:16:25.467912 | orchestrator | Friday 20 February 2026 05:16:07 +0000 (0:00:00.761) 0:20:14.908 ******* 2026-02-20 05:16:25.467919 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467926 | orchestrator | 2026-02-20 05:16:25.467934 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:16:25.467941 | orchestrator | Friday 20 February 2026 05:16:08 +0000 (0:00:00.781) 0:20:15.689 ******* 2026-02-20 05:16:25.467948 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467955 | orchestrator | 2026-02-20 05:16:25.467963 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:16:25.467970 | orchestrator | Friday 20 February 2026 05:16:08 +0000 (0:00:00.767) 0:20:16.457 ******* 2026-02-20 05:16:25.467977 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.467984 | orchestrator | 2026-02-20 05:16:25.467991 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:16:25.468016 | orchestrator | Friday 20 
February 2026 05:16:09 +0000 (0:00:00.790) 0:20:17.247 ******* 2026-02-20 05:16:25.468024 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468031 | orchestrator | 2026-02-20 05:16:25.468039 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:16:25.468046 | orchestrator | Friday 20 February 2026 05:16:10 +0000 (0:00:00.784) 0:20:18.032 ******* 2026-02-20 05:16:25.468053 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468060 | orchestrator | 2026-02-20 05:16:25.468067 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:16:25.468075 | orchestrator | Friday 20 February 2026 05:16:11 +0000 (0:00:00.808) 0:20:18.841 ******* 2026-02-20 05:16:25.468082 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468089 | orchestrator | 2026-02-20 05:16:25.468096 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:16:25.468103 | orchestrator | Friday 20 February 2026 05:16:12 +0000 (0:00:00.768) 0:20:19.609 ******* 2026-02-20 05:16:25.468110 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468118 | orchestrator | 2026-02-20 05:16:25.468125 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:16:25.468132 | orchestrator | Friday 20 February 2026 05:16:12 +0000 (0:00:00.791) 0:20:20.400 ******* 2026-02-20 05:16:25.468139 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468146 | orchestrator | 2026-02-20 05:16:25.468154 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:16:25.468170 | orchestrator | Friday 20 February 2026 05:16:13 +0000 (0:00:00.781) 0:20:21.182 ******* 2026-02-20 05:16:25.468182 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468194 | orchestrator | 2026-02-20 05:16:25.468206 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:16:25.468217 | orchestrator | Friday 20 February 2026 05:16:14 +0000 (0:00:00.776) 0:20:21.958 ******* 2026-02-20 05:16:25.468229 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468240 | orchestrator | 2026-02-20 05:16:25.468258 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:16:25.468271 | orchestrator | Friday 20 February 2026 05:16:15 +0000 (0:00:00.799) 0:20:22.758 ******* 2026-02-20 05:16:25.468282 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468295 | orchestrator | 2026-02-20 05:16:25.468309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:16:25.468321 | orchestrator | Friday 20 February 2026 05:16:16 +0000 (0:00:00.764) 0:20:23.523 ******* 2026-02-20 05:16:25.468334 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468341 | orchestrator | 2026-02-20 05:16:25.468349 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:16:25.468356 | orchestrator | Friday 20 February 2026 05:16:16 +0000 (0:00:00.785) 0:20:24.308 ******* 2026-02-20 05:16:25.468363 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468371 | orchestrator | 2026-02-20 05:16:25.468378 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:16:25.468385 | orchestrator | Friday 20 February 2026 05:16:17 +0000 (0:00:00.767) 0:20:25.076 ******* 2026-02-20 05:16:25.468392 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468400 | orchestrator | 2026-02-20 05:16:25.468407 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:16:25.468414 | orchestrator | Friday 20 February 2026 05:16:18 +0000 (0:00:00.769) 0:20:25.846 ******* 
2026-02-20 05:16:25.468421 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468429 | orchestrator | 2026-02-20 05:16:25.468436 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:16:25.468443 | orchestrator | Friday 20 February 2026 05:16:19 +0000 (0:00:00.764) 0:20:26.611 ******* 2026-02-20 05:16:25.468451 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468458 | orchestrator | 2026-02-20 05:16:25.468465 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:16:25.468472 | orchestrator | Friday 20 February 2026 05:16:19 +0000 (0:00:00.760) 0:20:27.371 ******* 2026-02-20 05:16:25.468480 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468487 | orchestrator | 2026-02-20 05:16:25.468494 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:16:25.468501 | orchestrator | Friday 20 February 2026 05:16:20 +0000 (0:00:00.849) 0:20:28.221 ******* 2026-02-20 05:16:25.468509 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468516 | orchestrator | 2026-02-20 05:16:25.468523 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:16:25.468532 | orchestrator | Friday 20 February 2026 05:16:21 +0000 (0:00:00.776) 0:20:28.997 ******* 2026-02-20 05:16:25.468545 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468557 | orchestrator | 2026-02-20 05:16:25.468568 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:16:25.468581 | orchestrator | Friday 20 February 2026 05:16:22 +0000 (0:00:00.782) 0:20:29.780 ******* 2026-02-20 05:16:25.468592 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468603 | orchestrator | 2026-02-20 05:16:25.468617 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-02-20 05:16:25.468631 | orchestrator | Friday 20 February 2026 05:16:23 +0000 (0:00:00.797) 0:20:30.578 ******* 2026-02-20 05:16:25.468644 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468656 | orchestrator | 2026-02-20 05:16:25.468703 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:16:25.468711 | orchestrator | Friday 20 February 2026 05:16:23 +0000 (0:00:00.770) 0:20:31.348 ******* 2026-02-20 05:16:25.468718 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468726 | orchestrator | 2026-02-20 05:16:25.468733 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:16:25.468741 | orchestrator | Friday 20 February 2026 05:16:24 +0000 (0:00:00.786) 0:20:32.135 ******* 2026-02-20 05:16:25.468748 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:25.468756 | orchestrator | 2026-02-20 05:16:25.468763 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:16:25.468779 | orchestrator | Friday 20 February 2026 05:16:25 +0000 (0:00:00.805) 0:20:32.940 ******* 2026-02-20 05:16:54.948802 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948894 | orchestrator | 2026-02-20 05:16:54.948902 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:16:54.948909 | orchestrator | Friday 20 February 2026 05:16:26 +0000 (0:00:00.759) 0:20:33.699 ******* 2026-02-20 05:16:54.948914 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948919 | orchestrator | 2026-02-20 05:16:54.948924 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:16:54.948929 | orchestrator | Friday 20 February 2026 05:16:26 +0000 
(0:00:00.751) 0:20:34.451 ******* 2026-02-20 05:16:54.948934 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948939 | orchestrator | 2026-02-20 05:16:54.948943 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:16:54.948948 | orchestrator | Friday 20 February 2026 05:16:27 +0000 (0:00:00.759) 0:20:35.210 ******* 2026-02-20 05:16:54.948953 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948957 | orchestrator | 2026-02-20 05:16:54.948962 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:16:54.948967 | orchestrator | Friday 20 February 2026 05:16:28 +0000 (0:00:00.848) 0:20:36.059 ******* 2026-02-20 05:16:54.948971 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948976 | orchestrator | 2026-02-20 05:16:54.948981 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:16:54.948985 | orchestrator | Friday 20 February 2026 05:16:29 +0000 (0:00:00.763) 0:20:36.822 ******* 2026-02-20 05:16:54.948990 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.948995 | orchestrator | 2026-02-20 05:16:54.948999 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:16:54.949004 | orchestrator | Friday 20 February 2026 05:16:30 +0000 (0:00:00.888) 0:20:37.710 ******* 2026-02-20 05:16:54.949009 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949013 | orchestrator | 2026-02-20 05:16:54.949030 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:16:54.949036 | orchestrator | Friday 20 February 2026 05:16:30 +0000 (0:00:00.764) 0:20:38.475 ******* 2026-02-20 05:16:54.949040 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949045 | orchestrator | 2026-02-20 05:16:54.949050 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:16:54.949056 | orchestrator | Friday 20 February 2026 05:16:31 +0000 (0:00:00.774) 0:20:39.250 ******* 2026-02-20 05:16:54.949061 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949066 | orchestrator | 2026-02-20 05:16:54.949070 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:16:54.949075 | orchestrator | Friday 20 February 2026 05:16:32 +0000 (0:00:00.766) 0:20:40.016 ******* 2026-02-20 05:16:54.949080 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949084 | orchestrator | 2026-02-20 05:16:54.949089 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:16:54.949093 | orchestrator | Friday 20 February 2026 05:16:33 +0000 (0:00:00.773) 0:20:40.790 ******* 2026-02-20 05:16:54.949113 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949118 | orchestrator | 2026-02-20 05:16:54.949123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:16:54.949127 | orchestrator | Friday 20 February 2026 05:16:34 +0000 (0:00:00.761) 0:20:41.552 ******* 2026-02-20 05:16:54.949132 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949136 | orchestrator | 2026-02-20 05:16:54.949141 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:16:54.949146 | orchestrator | Friday 20 February 2026 05:16:34 +0000 (0:00:00.749) 0:20:42.301 ******* 2026-02-20 05:16:54.949150 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:16:54.949155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:16:54.949160 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:16:54.949164 | orchestrator | 
skipping: [testbed-node-1] 2026-02-20 05:16:54.949169 | orchestrator | 2026-02-20 05:16:54.949174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:16:54.949178 | orchestrator | Friday 20 February 2026 05:16:35 +0000 (0:00:01.141) 0:20:43.442 ******* 2026-02-20 05:16:54.949183 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:16:54.949187 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:16:54.949192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:16:54.949196 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949201 | orchestrator | 2026-02-20 05:16:54.949206 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:16:54.949214 | orchestrator | Friday 20 February 2026 05:16:36 +0000 (0:00:01.032) 0:20:44.475 ******* 2026-02-20 05:16:54.949221 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:16:54.949233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:16:54.949242 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:16:54.949249 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949257 | orchestrator | 2026-02-20 05:16:54.949264 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:16:54.949271 | orchestrator | Friday 20 February 2026 05:16:38 +0000 (0:00:01.047) 0:20:45.523 ******* 2026-02-20 05:16:54.949278 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949286 | orchestrator | 2026-02-20 05:16:54.949292 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:16:54.949299 | orchestrator | Friday 20 February 2026 05:16:38 +0000 (0:00:00.756) 0:20:46.279 ******* 2026-02-20 05:16:54.949306 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-20 05:16:54.949313 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949320 | orchestrator | 2026-02-20 05:16:54.949328 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:16:54.949350 | orchestrator | Friday 20 February 2026 05:16:39 +0000 (0:00:00.893) 0:20:47.172 ******* 2026-02-20 05:16:54.949359 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949367 | orchestrator | 2026-02-20 05:16:54.949375 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:16:54.949384 | orchestrator | Friday 20 February 2026 05:16:40 +0000 (0:00:00.874) 0:20:48.047 ******* 2026-02-20 05:16:54.949392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 05:16:54.949400 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-20 05:16:54.949408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 05:16:54.949416 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949424 | orchestrator | 2026-02-20 05:16:54.949432 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 05:16:54.949440 | orchestrator | Friday 20 February 2026 05:16:41 +0000 (0:00:01.054) 0:20:49.102 ******* 2026-02-20 05:16:54.949456 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949464 | orchestrator | 2026-02-20 05:16:54.949472 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 05:16:54.949481 | orchestrator | Friday 20 February 2026 05:16:42 +0000 (0:00:00.770) 0:20:49.872 ******* 2026-02-20 05:16:54.949489 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949498 | orchestrator | 2026-02-20 05:16:54.949504 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-02-20 05:16:54.949509 | orchestrator | Friday 20 February 2026 05:16:43 +0000 (0:00:00.768) 0:20:50.641 ******* 2026-02-20 05:16:54.949515 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949520 | orchestrator | 2026-02-20 05:16:54.949525 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 05:16:54.949531 | orchestrator | Friday 20 February 2026 05:16:43 +0000 (0:00:00.753) 0:20:51.394 ******* 2026-02-20 05:16:54.949536 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:16:54.949541 | orchestrator | 2026-02-20 05:16:54.949552 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-20 05:16:54.949557 | orchestrator | 2026-02-20 05:16:54.949562 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-20 05:16:54.949568 | orchestrator | Friday 20 February 2026 05:16:44 +0000 (0:00:00.972) 0:20:52.367 ******* 2026-02-20 05:16:54.949573 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949579 | orchestrator | 2026-02-20 05:16:54.949584 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:16:54.949590 | orchestrator | Friday 20 February 2026 05:16:45 +0000 (0:00:00.784) 0:20:53.152 ******* 2026-02-20 05:16:54.949595 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949601 | orchestrator | 2026-02-20 05:16:54.949606 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:16:54.949612 | orchestrator | Friday 20 February 2026 05:16:46 +0000 (0:00:00.768) 0:20:53.921 ******* 2026-02-20 05:16:54.949617 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949622 | orchestrator | 2026-02-20 05:16:54.949628 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-02-20 05:16:54.949633 | orchestrator | Friday 20 February 2026 05:16:47 +0000 (0:00:00.766) 0:20:54.688 ******* 2026-02-20 05:16:54.949638 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949644 | orchestrator | 2026-02-20 05:16:54.949649 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:16:54.949655 | orchestrator | Friday 20 February 2026 05:16:47 +0000 (0:00:00.763) 0:20:55.451 ******* 2026-02-20 05:16:54.949660 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949665 | orchestrator | 2026-02-20 05:16:54.949671 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:16:54.949676 | orchestrator | Friday 20 February 2026 05:16:48 +0000 (0:00:00.767) 0:20:56.219 ******* 2026-02-20 05:16:54.949702 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949707 | orchestrator | 2026-02-20 05:16:54.949712 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:16:54.949717 | orchestrator | Friday 20 February 2026 05:16:49 +0000 (0:00:00.756) 0:20:56.976 ******* 2026-02-20 05:16:54.949721 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949726 | orchestrator | 2026-02-20 05:16:54.949731 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:16:54.949735 | orchestrator | Friday 20 February 2026 05:16:50 +0000 (0:00:00.772) 0:20:57.748 ******* 2026-02-20 05:16:54.949740 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949744 | orchestrator | 2026-02-20 05:16:54.949749 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:16:54.949754 | orchestrator | Friday 20 February 2026 05:16:51 +0000 (0:00:00.757) 0:20:58.506 ******* 2026-02-20 05:16:54.949758 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949763 
| orchestrator | 2026-02-20 05:16:54.949768 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:16:54.949776 | orchestrator | Friday 20 February 2026 05:16:51 +0000 (0:00:00.764) 0:20:59.271 ******* 2026-02-20 05:16:54.949781 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949785 | orchestrator | 2026-02-20 05:16:54.949790 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:16:54.949795 | orchestrator | Friday 20 February 2026 05:16:52 +0000 (0:00:00.776) 0:21:00.047 ******* 2026-02-20 05:16:54.949799 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949804 | orchestrator | 2026-02-20 05:16:54.949808 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:16:54.949813 | orchestrator | Friday 20 February 2026 05:16:53 +0000 (0:00:00.783) 0:21:00.831 ******* 2026-02-20 05:16:54.949818 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949822 | orchestrator | 2026-02-20 05:16:54.949827 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:16:54.949831 | orchestrator | Friday 20 February 2026 05:16:54 +0000 (0:00:00.804) 0:21:01.636 ******* 2026-02-20 05:16:54.949836 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:16:54.949841 | orchestrator | 2026-02-20 05:16:54.949845 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:16:54.949854 | orchestrator | Friday 20 February 2026 05:16:54 +0000 (0:00:00.780) 0:21:02.416 ******* 2026-02-20 05:17:26.235907 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236055 | orchestrator | 2026-02-20 05:17:26.236082 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:17:26.236101 | orchestrator | Friday 20 February 2026 
05:16:55 +0000 (0:00:00.811) 0:21:03.228 ******* 2026-02-20 05:17:26.236119 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236135 | orchestrator | 2026-02-20 05:17:26.236153 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:17:26.236171 | orchestrator | Friday 20 February 2026 05:16:56 +0000 (0:00:00.756) 0:21:03.984 ******* 2026-02-20 05:17:26.236190 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236209 | orchestrator | 2026-02-20 05:17:26.236228 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:17:26.236246 | orchestrator | Friday 20 February 2026 05:16:57 +0000 (0:00:00.776) 0:21:04.761 ******* 2026-02-20 05:17:26.236265 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236284 | orchestrator | 2026-02-20 05:17:26.236303 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:17:26.236323 | orchestrator | Friday 20 February 2026 05:16:58 +0000 (0:00:00.789) 0:21:05.550 ******* 2026-02-20 05:17:26.236335 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236347 | orchestrator | 2026-02-20 05:17:26.236358 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:17:26.236369 | orchestrator | Friday 20 February 2026 05:16:58 +0000 (0:00:00.758) 0:21:06.309 ******* 2026-02-20 05:17:26.236380 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236391 | orchestrator | 2026-02-20 05:17:26.236402 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:17:26.236414 | orchestrator | Friday 20 February 2026 05:16:59 +0000 (0:00:00.767) 0:21:07.076 ******* 2026-02-20 05:17:26.236430 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236454 | orchestrator | 2026-02-20 05:17:26.236503 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:17:26.236523 | orchestrator | Friday 20 February 2026 05:17:00 +0000 (0:00:00.765) 0:21:07.842 ******* 2026-02-20 05:17:26.236540 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236559 | orchestrator | 2026-02-20 05:17:26.236576 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:17:26.236594 | orchestrator | Friday 20 February 2026 05:17:01 +0000 (0:00:00.744) 0:21:08.586 ******* 2026-02-20 05:17:26.236612 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236631 | orchestrator | 2026-02-20 05:17:26.236680 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:17:26.236747 | orchestrator | Friday 20 February 2026 05:17:01 +0000 (0:00:00.796) 0:21:09.383 ******* 2026-02-20 05:17:26.236771 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236789 | orchestrator | 2026-02-20 05:17:26.236807 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:17:26.236824 | orchestrator | Friday 20 February 2026 05:17:02 +0000 (0:00:00.783) 0:21:10.167 ******* 2026-02-20 05:17:26.236841 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236859 | orchestrator | 2026-02-20 05:17:26.236876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:17:26.236894 | orchestrator | Friday 20 February 2026 05:17:03 +0000 (0:00:00.753) 0:21:10.921 ******* 2026-02-20 05:17:26.236912 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236927 | orchestrator | 2026-02-20 05:17:26.236945 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:17:26.236962 | orchestrator | Friday 20 February 2026 05:17:04 +0000 (0:00:00.777) 0:21:11.698 ******* 
2026-02-20 05:17:26.236979 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.236995 | orchestrator | 2026-02-20 05:17:26.237011 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:17:26.237028 | orchestrator | Friday 20 February 2026 05:17:04 +0000 (0:00:00.765) 0:21:12.464 ******* 2026-02-20 05:17:26.237044 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237061 | orchestrator | 2026-02-20 05:17:26.237077 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:17:26.237094 | orchestrator | Friday 20 February 2026 05:17:05 +0000 (0:00:00.833) 0:21:13.297 ******* 2026-02-20 05:17:26.237113 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237129 | orchestrator | 2026-02-20 05:17:26.237146 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:17:26.237164 | orchestrator | Friday 20 February 2026 05:17:06 +0000 (0:00:00.797) 0:21:14.095 ******* 2026-02-20 05:17:26.237182 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237199 | orchestrator | 2026-02-20 05:17:26.237216 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:17:26.237233 | orchestrator | Friday 20 February 2026 05:17:07 +0000 (0:00:00.763) 0:21:14.858 ******* 2026-02-20 05:17:26.237250 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237267 | orchestrator | 2026-02-20 05:17:26.237285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:17:26.237303 | orchestrator | Friday 20 February 2026 05:17:08 +0000 (0:00:00.764) 0:21:15.623 ******* 2026-02-20 05:17:26.237321 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237337 | orchestrator | 2026-02-20 05:17:26.237356 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-20 05:17:26.237374 | orchestrator | Friday 20 February 2026 05:17:08 +0000 (0:00:00.758) 0:21:16.381 ******* 2026-02-20 05:17:26.237391 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237408 | orchestrator | 2026-02-20 05:17:26.237427 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:17:26.237445 | orchestrator | Friday 20 February 2026 05:17:09 +0000 (0:00:00.817) 0:21:17.199 ******* 2026-02-20 05:17:26.237463 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237482 | orchestrator | 2026-02-20 05:17:26.237501 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:17:26.237520 | orchestrator | Friday 20 February 2026 05:17:10 +0000 (0:00:00.826) 0:21:18.025 ******* 2026-02-20 05:17:26.237540 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237558 | orchestrator | 2026-02-20 05:17:26.237607 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:17:26.237627 | orchestrator | Friday 20 February 2026 05:17:11 +0000 (0:00:00.788) 0:21:18.814 ******* 2026-02-20 05:17:26.237646 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237674 | orchestrator | 2026-02-20 05:17:26.237686 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:17:26.237697 | orchestrator | Friday 20 February 2026 05:17:12 +0000 (0:00:00.779) 0:21:19.593 ******* 2026-02-20 05:17:26.237747 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237767 | orchestrator | 2026-02-20 05:17:26.237796 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:17:26.237815 | orchestrator | Friday 20 February 2026 05:17:12 +0000 (0:00:00.804) 0:21:20.397 ******* 2026-02-20 05:17:26.237833 | orchestrator | skipping: 
[testbed-node-2] 2026-02-20 05:17:26.237852 | orchestrator | 2026-02-20 05:17:26.237872 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:17:26.237890 | orchestrator | Friday 20 February 2026 05:17:13 +0000 (0:00:00.763) 0:21:21.161 ******* 2026-02-20 05:17:26.237909 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237922 | orchestrator | 2026-02-20 05:17:26.237933 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:17:26.237944 | orchestrator | Friday 20 February 2026 05:17:14 +0000 (0:00:00.755) 0:21:21.917 ******* 2026-02-20 05:17:26.237955 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.237966 | orchestrator | 2026-02-20 05:17:26.237977 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:17:26.237990 | orchestrator | Friday 20 February 2026 05:17:15 +0000 (0:00:00.756) 0:21:22.673 ******* 2026-02-20 05:17:26.238001 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238012 | orchestrator | 2026-02-20 05:17:26.238187 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:17:26.238210 | orchestrator | Friday 20 February 2026 05:17:15 +0000 (0:00:00.785) 0:21:23.459 ******* 2026-02-20 05:17:26.238228 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238247 | orchestrator | 2026-02-20 05:17:26.238259 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:17:26.238270 | orchestrator | Friday 20 February 2026 05:17:16 +0000 (0:00:00.766) 0:21:24.225 ******* 2026-02-20 05:17:26.238281 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238292 | orchestrator | 2026-02-20 05:17:26.238303 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:17:26.238314 | orchestrator | Friday 20 February 2026 05:17:17 +0000 (0:00:00.776) 0:21:25.002 ******* 2026-02-20 05:17:26.238325 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238336 | orchestrator | 2026-02-20 05:17:26.238347 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:17:26.238358 | orchestrator | Friday 20 February 2026 05:17:18 +0000 (0:00:00.774) 0:21:25.777 ******* 2026-02-20 05:17:26.238369 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238380 | orchestrator | 2026-02-20 05:17:26.238391 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:17:26.238401 | orchestrator | Friday 20 February 2026 05:17:19 +0000 (0:00:00.753) 0:21:26.530 ******* 2026-02-20 05:17:26.238438 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238450 | orchestrator | 2026-02-20 05:17:26.238461 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:17:26.238472 | orchestrator | Friday 20 February 2026 05:17:19 +0000 (0:00:00.783) 0:21:27.313 ******* 2026-02-20 05:17:26.238483 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238493 | orchestrator | 2026-02-20 05:17:26.238504 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:17:26.238515 | orchestrator | Friday 20 February 2026 05:17:20 +0000 (0:00:00.898) 0:21:28.212 ******* 2026-02-20 05:17:26.238526 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238537 | orchestrator | 2026-02-20 05:17:26.238548 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:17:26.238558 | orchestrator | Friday 20 February 2026 05:17:21 +0000 (0:00:00.774) 0:21:28.986 ******* 2026-02-20 
05:17:26.238581 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238592 | orchestrator | 2026-02-20 05:17:26.238603 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:17:26.238619 | orchestrator | Friday 20 February 2026 05:17:22 +0000 (0:00:00.861) 0:21:29.848 ******* 2026-02-20 05:17:26.238637 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238664 | orchestrator | 2026-02-20 05:17:26.238684 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:17:26.238803 | orchestrator | Friday 20 February 2026 05:17:23 +0000 (0:00:00.768) 0:21:30.616 ******* 2026-02-20 05:17:26.238828 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238847 | orchestrator | 2026-02-20 05:17:26.238863 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:17:26.238882 | orchestrator | Friday 20 February 2026 05:17:23 +0000 (0:00:00.780) 0:21:31.397 ******* 2026-02-20 05:17:26.238899 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238915 | orchestrator | 2026-02-20 05:17:26.238933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:17:26.238951 | orchestrator | Friday 20 February 2026 05:17:24 +0000 (0:00:00.780) 0:21:32.177 ******* 2026-02-20 05:17:26.238969 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.238987 | orchestrator | 2026-02-20 05:17:26.239006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:17:26.239024 | orchestrator | Friday 20 February 2026 05:17:25 +0000 (0:00:00.765) 0:21:32.942 ******* 2026-02-20 05:17:26.239043 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:17:26.239061 | orchestrator | 2026-02-20 05:17:26.239078 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:17:26.239118 | orchestrator | Friday 20 February 2026 05:17:26 +0000 (0:00:00.757) 0:21:33.700 ******* 2026-02-20 05:18:14.499226 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499381 | orchestrator | 2026-02-20 05:18:14.499402 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:18:14.499415 | orchestrator | Friday 20 February 2026 05:17:26 +0000 (0:00:00.752) 0:21:34.453 ******* 2026-02-20 05:18:14.499428 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-20 05:18:14.499439 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-20 05:18:14.499450 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-20 05:18:14.499461 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499472 | orchestrator | 2026-02-20 05:18:14.499484 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:18:14.499494 | orchestrator | Friday 20 February 2026 05:17:28 +0000 (0:00:01.349) 0:21:35.802 ******* 2026-02-20 05:18:14.499505 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-20 05:18:14.499517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-20 05:18:14.499527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-20 05:18:14.499538 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499549 | orchestrator | 2026-02-20 05:18:14.499559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:18:14.499584 | orchestrator | Friday 20 February 2026 05:17:29 +0000 (0:00:01.344) 0:21:37.147 ******* 2026-02-20 05:18:14.499595 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-20 05:18:14.499606 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-02-20 05:18:14.499617 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-20 05:18:14.499628 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499639 | orchestrator | 2026-02-20 05:18:14.499667 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:18:14.499679 | orchestrator | Friday 20 February 2026 05:17:30 +0000 (0:00:01.067) 0:21:38.215 ******* 2026-02-20 05:18:14.499713 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499726 | orchestrator | 2026-02-20 05:18:14.499767 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:18:14.499781 | orchestrator | Friday 20 February 2026 05:17:31 +0000 (0:00:00.800) 0:21:39.016 ******* 2026-02-20 05:18:14.499795 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-20 05:18:14.499806 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499816 | orchestrator | 2026-02-20 05:18:14.499827 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:18:14.499838 | orchestrator | Friday 20 February 2026 05:17:32 +0000 (0:00:00.895) 0:21:39.911 ******* 2026-02-20 05:18:14.499849 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499860 | orchestrator | 2026-02-20 05:18:14.499871 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:18:14.499881 | orchestrator | Friday 20 February 2026 05:17:33 +0000 (0:00:00.793) 0:21:40.705 ******* 2026-02-20 05:18:14.499892 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-20 05:18:14.499903 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-20 05:18:14.499914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-20 05:18:14.499924 | orchestrator | skipping: 
[testbed-node-2] 2026-02-20 05:18:14.499935 | orchestrator | 2026-02-20 05:18:14.499947 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 05:18:14.499958 | orchestrator | Friday 20 February 2026 05:17:34 +0000 (0:00:01.061) 0:21:41.766 ******* 2026-02-20 05:18:14.499969 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.499980 | orchestrator | 2026-02-20 05:18:14.499990 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 05:18:14.500001 | orchestrator | Friday 20 February 2026 05:17:35 +0000 (0:00:00.750) 0:21:42.516 ******* 2026-02-20 05:18:14.500012 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.500023 | orchestrator | 2026-02-20 05:18:14.500034 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-20 05:18:14.500045 | orchestrator | Friday 20 February 2026 05:17:35 +0000 (0:00:00.837) 0:21:43.354 ******* 2026-02-20 05:18:14.500055 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.500066 | orchestrator | 2026-02-20 05:18:14.500077 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 05:18:14.500088 | orchestrator | Friday 20 February 2026 05:17:36 +0000 (0:00:00.767) 0:21:44.121 ******* 2026-02-20 05:18:14.500099 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:18:14.500110 | orchestrator | 2026-02-20 05:18:14.500121 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-20 05:18:14.500131 | orchestrator | 2026-02-20 05:18:14.500142 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-20 05:18:14.500153 | orchestrator | Friday 20 February 2026 05:17:37 +0000 (0:00:01.337) 0:21:45.459 ******* 2026-02-20 05:18:14.500164 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:18:14.500175 | 
orchestrator | 2026-02-20 05:18:14.500186 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-20 05:18:14.500202 | orchestrator | Friday 20 February 2026 05:17:50 +0000 (0:00:12.969) 0:21:58.429 ******* 2026-02-20 05:18:14.500221 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:18:14.500238 | orchestrator | 2026-02-20 05:18:14.500255 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:18:14.500272 | orchestrator | Friday 20 February 2026 05:17:53 +0000 (0:00:02.419) 0:22:00.848 ******* 2026-02-20 05:18:14.500288 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-20 05:18:14.500305 | orchestrator | 2026-02-20 05:18:14.500321 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:18:14.500337 | orchestrator | Friday 20 February 2026 05:17:54 +0000 (0:00:01.118) 0:22:01.966 ******* 2026-02-20 05:18:14.500354 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500386 | orchestrator | 2026-02-20 05:18:14.500406 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:18:14.500448 | orchestrator | Friday 20 February 2026 05:17:55 +0000 (0:00:01.496) 0:22:03.463 ******* 2026-02-20 05:18:14.500468 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500487 | orchestrator | 2026-02-20 05:18:14.500506 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:18:14.500521 | orchestrator | Friday 20 February 2026 05:17:57 +0000 (0:00:01.097) 0:22:04.561 ******* 2026-02-20 05:18:14.500532 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500543 | orchestrator | 2026-02-20 05:18:14.500554 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:18:14.500565 | orchestrator | 
Friday 20 February 2026 05:17:58 +0000 (0:00:01.441) 0:22:06.003 ******* 2026-02-20 05:18:14.500576 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500586 | orchestrator | 2026-02-20 05:18:14.500597 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:18:14.500608 | orchestrator | Friday 20 February 2026 05:17:59 +0000 (0:00:01.135) 0:22:07.139 ******* 2026-02-20 05:18:14.500618 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500629 | orchestrator | 2026-02-20 05:18:14.500640 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:18:14.500651 | orchestrator | Friday 20 February 2026 05:18:00 +0000 (0:00:01.143) 0:22:08.283 ******* 2026-02-20 05:18:14.500662 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500672 | orchestrator | 2026-02-20 05:18:14.500683 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:18:14.500695 | orchestrator | Friday 20 February 2026 05:18:01 +0000 (0:00:01.130) 0:22:09.413 ******* 2026-02-20 05:18:14.500705 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:14.500716 | orchestrator | 2026-02-20 05:18:14.500727 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:18:14.500763 | orchestrator | Friday 20 February 2026 05:18:03 +0000 (0:00:01.140) 0:22:10.553 ******* 2026-02-20 05:18:14.500775 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500786 | orchestrator | 2026-02-20 05:18:14.500806 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:18:14.500817 | orchestrator | Friday 20 February 2026 05:18:04 +0000 (0:00:01.124) 0:22:11.677 ******* 2026-02-20 05:18:14.500828 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:18:14.500839 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:18:14.500850 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:18:14.500861 | orchestrator | 2026-02-20 05:18:14.500872 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:18:14.500883 | orchestrator | Friday 20 February 2026 05:18:06 +0000 (0:00:01.994) 0:22:13.672 ******* 2026-02-20 05:18:14.500894 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:14.500905 | orchestrator | 2026-02-20 05:18:14.500915 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:18:14.500926 | orchestrator | Friday 20 February 2026 05:18:07 +0000 (0:00:01.227) 0:22:14.899 ******* 2026-02-20 05:18:14.500937 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:18:14.500948 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:18:14.500959 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:18:14.500970 | orchestrator | 2026-02-20 05:18:14.500981 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:18:14.500992 | orchestrator | Friday 20 February 2026 05:18:10 +0000 (0:00:02.889) 0:22:17.789 ******* 2026-02-20 05:18:14.501003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 05:18:14.501014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:18:14.501033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:18:14.501044 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:14.501055 | orchestrator | 2026-02-20 05:18:14.501066 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:18:14.501077 | 
orchestrator | Friday 20 February 2026 05:18:11 +0000 (0:00:01.389) 0:22:19.179 ******* 2026-02-20 05:18:14.501090 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:18:14.501105 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:18:14.501116 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:18:14.501128 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:14.501139 | orchestrator | 2026-02-20 05:18:14.501150 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:18:14.501161 | orchestrator | Friday 20 February 2026 05:18:13 +0000 (0:00:01.645) 0:22:20.824 ******* 2026-02-20 05:18:14.501183 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:34.169527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:34.169669 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:34.169688 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.169701 | orchestrator | 2026-02-20 05:18:34.169712 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:18:34.169723 | orchestrator | Friday 20 February 2026 05:18:14 +0000 (0:00:01.147) 0:22:21.971 ******* 2026-02-20 05:18:34.169838 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:18:07.920624', 'end': '2026-02-20 05:18:07.981199', 'delta': '0:00:00.060575', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:18:34.169862 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:18:08.495351', 'end': '2026-02-20 05:18:08.548448', 'delta': '0:00:00.053097', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:18:34.169900 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:18:09.129342', 'end': '2026-02-20 05:18:09.194293', 'delta': '0:00:00.064951', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:18:34.169918 | orchestrator | 2026-02-20 05:18:34.169933 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:18:34.169949 | orchestrator | Friday 20 February 2026 05:18:15 +0000 (0:00:01.180) 0:22:23.151 ******* 2026-02-20 05:18:34.169965 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:34.169981 | orchestrator | 2026-02-20 05:18:34.169998 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:18:34.170015 | orchestrator | Friday 20 February 2026 05:18:16 
+0000 (0:00:01.300) 0:22:24.452 ******* 2026-02-20 05:18:34.170103 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170115 | orchestrator | 2026-02-20 05:18:34.170126 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:18:34.170137 | orchestrator | Friday 20 February 2026 05:18:18 +0000 (0:00:01.233) 0:22:25.685 ******* 2026-02-20 05:18:34.170149 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:34.170160 | orchestrator | 2026-02-20 05:18:34.170171 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:18:34.170182 | orchestrator | Friday 20 February 2026 05:18:19 +0000 (0:00:01.101) 0:22:26.787 ******* 2026-02-20 05:18:34.170213 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:34.170225 | orchestrator | 2026-02-20 05:18:34.170236 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:18:34.170247 | orchestrator | Friday 20 February 2026 05:18:21 +0000 (0:00:01.984) 0:22:28.772 ******* 2026-02-20 05:18:34.170258 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:34.170269 | orchestrator | 2026-02-20 05:18:34.170281 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:18:34.170292 | orchestrator | Friday 20 February 2026 05:18:22 +0000 (0:00:01.134) 0:22:29.907 ******* 2026-02-20 05:18:34.170303 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170314 | orchestrator | 2026-02-20 05:18:34.170326 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:18:34.170337 | orchestrator | Friday 20 February 2026 05:18:23 +0000 (0:00:01.125) 0:22:31.033 ******* 2026-02-20 05:18:34.170348 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170359 | orchestrator | 2026-02-20 05:18:34.170371 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-20 05:18:34.170382 | orchestrator | Friday 20 February 2026 05:18:25 +0000 (0:00:01.514) 0:22:32.548 ******* 2026-02-20 05:18:34.170393 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170403 | orchestrator | 2026-02-20 05:18:34.170413 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:18:34.170433 | orchestrator | Friday 20 February 2026 05:18:26 +0000 (0:00:01.122) 0:22:33.670 ******* 2026-02-20 05:18:34.170443 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170453 | orchestrator | 2026-02-20 05:18:34.170462 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:18:34.170472 | orchestrator | Friday 20 February 2026 05:18:27 +0000 (0:00:01.127) 0:22:34.798 ******* 2026-02-20 05:18:34.170488 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170498 | orchestrator | 2026-02-20 05:18:34.170507 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:18:34.170517 | orchestrator | Friday 20 February 2026 05:18:28 +0000 (0:00:01.141) 0:22:35.939 ******* 2026-02-20 05:18:34.170527 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170536 | orchestrator | 2026-02-20 05:18:34.170546 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:18:34.170555 | orchestrator | Friday 20 February 2026 05:18:29 +0000 (0:00:01.098) 0:22:37.037 ******* 2026-02-20 05:18:34.170565 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170575 | orchestrator | 2026-02-20 05:18:34.170584 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:18:34.170594 | orchestrator | Friday 20 February 2026 05:18:30 +0000 (0:00:01.113) 0:22:38.150 ******* 2026-02-20 05:18:34.170604 | orchestrator | 
skipping: [testbed-node-0] 2026-02-20 05:18:34.170613 | orchestrator | 2026-02-20 05:18:34.170623 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:18:34.170633 | orchestrator | Friday 20 February 2026 05:18:31 +0000 (0:00:01.147) 0:22:39.298 ******* 2026-02-20 05:18:34.170643 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:34.170653 | orchestrator | 2026-02-20 05:18:34.170662 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:18:34.170672 | orchestrator | Friday 20 February 2026 05:18:32 +0000 (0:00:01.116) 0:22:40.414 ******* 2026-02-20 05:18:34.170682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:34.170694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:34.170704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:34.170715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:18:34.170734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:35.784549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:35.784681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:35.784698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:18:35.784710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:35.784717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:18:35.784745 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:35.784778 | orchestrator | 2026-02-20 05:18:35.784786 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:18:35.784794 | orchestrator | Friday 20 February 2026 05:18:34 +0000 (0:00:01.223) 0:22:41.638 ******* 2026-02-20 05:18:35.784819 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784834 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784849 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784864 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:35.784886 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:56.552004 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:56.552133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:56.552176 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:18:56.552190 | orchestrator | skipping: [testbed-node-0] 2026-02-20 
05:18:56.552204 | orchestrator | 2026-02-20 05:18:56.552217 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:18:56.552230 | orchestrator | Friday 20 February 2026 05:18:35 +0000 (0:00:01.615) 0:22:43.254 ******* 2026-02-20 05:18:56.552241 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:56.552253 | orchestrator | 2026-02-20 05:18:56.552264 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:18:56.552275 | orchestrator | Friday 20 February 2026 05:18:37 +0000 (0:00:01.501) 0:22:44.755 ******* 2026-02-20 05:18:56.552286 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:56.552297 | orchestrator | 2026-02-20 05:18:56.552308 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:18:56.552336 | orchestrator | Friday 20 February 2026 05:18:38 +0000 (0:00:01.115) 0:22:45.871 ******* 2026-02-20 05:18:56.552348 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:56.552359 | orchestrator | 2026-02-20 05:18:56.552370 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:18:56.552381 | orchestrator | Friday 20 February 2026 05:18:39 +0000 (0:00:01.437) 0:22:47.309 ******* 2026-02-20 05:18:56.552392 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.552403 | orchestrator | 2026-02-20 05:18:56.552414 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:18:56.552425 | orchestrator | Friday 20 February 2026 05:18:40 +0000 (0:00:01.167) 0:22:48.476 ******* 2026-02-20 05:18:56.552435 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.552446 | orchestrator | 2026-02-20 05:18:56.552464 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:18:56.552476 | orchestrator | Friday 20 February 2026 
05:18:42 +0000 (0:00:01.224) 0:22:49.701 ******* 2026-02-20 05:18:56.552489 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.552502 | orchestrator | 2026-02-20 05:18:56.552515 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:18:56.552528 | orchestrator | Friday 20 February 2026 05:18:43 +0000 (0:00:01.113) 0:22:50.814 ******* 2026-02-20 05:18:56.552541 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:18:56.552554 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-20 05:18:56.552567 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-20 05:18:56.552580 | orchestrator | 2026-02-20 05:18:56.552593 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:18:56.552607 | orchestrator | Friday 20 February 2026 05:18:45 +0000 (0:00:01.692) 0:22:52.507 ******* 2026-02-20 05:18:56.552620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 05:18:56.552633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:18:56.552646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:18:56.552658 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.552671 | orchestrator | 2026-02-20 05:18:56.552684 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:18:56.552697 | orchestrator | Friday 20 February 2026 05:18:46 +0000 (0:00:01.186) 0:22:53.693 ******* 2026-02-20 05:18:56.552710 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.552730 | orchestrator | 2026-02-20 05:18:56.552743 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:18:56.552756 | orchestrator | Friday 20 February 2026 05:18:47 +0000 (0:00:01.156) 0:22:54.850 ******* 2026-02-20 05:18:56.552802 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:18:56.552816 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:18:56.552828 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:18:56.552839 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:18:56.552850 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:18:56.552861 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:18:56.552872 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:18:56.552883 | orchestrator | 2026-02-20 05:18:56.552894 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:18:56.552905 | orchestrator | Friday 20 February 2026 05:18:49 +0000 (0:00:01.743) 0:22:56.593 ******* 2026-02-20 05:18:56.552916 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:18:56.552927 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:18:56.552938 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:18:56.552949 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:18:56.552960 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:18:56.552971 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:18:56.552982 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:18:56.552993 | orchestrator | 2026-02-20 05:18:56.553004 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:18:56.553015 | orchestrator | Friday 20 February 2026 05:18:51 +0000 (0:00:02.540) 0:22:59.134 ******* 2026-02-20 05:18:56.553026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-20 05:18:56.553039 | orchestrator | 2026-02-20 05:18:56.553050 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:18:56.553061 | orchestrator | Friday 20 February 2026 05:18:52 +0000 (0:00:01.108) 0:23:00.242 ******* 2026-02-20 05:18:56.553072 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-20 05:18:56.553083 | orchestrator | 2026-02-20 05:18:56.553094 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:18:56.553105 | orchestrator | Friday 20 February 2026 05:18:53 +0000 (0:00:01.135) 0:23:01.378 ******* 2026-02-20 05:18:56.553116 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:18:56.553127 | orchestrator | 2026-02-20 05:18:56.553138 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:18:56.553149 | orchestrator | Friday 20 February 2026 05:18:55 +0000 (0:00:01.528) 0:23:02.906 ******* 2026-02-20 05:18:56.553160 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:18:56.553171 | orchestrator | 2026-02-20 05:18:56.553188 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:19:46.294898 | orchestrator | Friday 20 February 2026 05:18:56 +0000 (0:00:01.116) 0:23:04.023 ******* 2026-02-20 05:19:46.295033 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295052 | orchestrator | 2026-02-20 05:19:46.295064 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-20 05:19:46.295074 | orchestrator | Friday 20 February 2026 05:18:57 +0000 (0:00:01.123) 0:23:05.146 ******* 2026-02-20 05:19:46.295084 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295119 | orchestrator | 2026-02-20 05:19:46.295129 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:19:46.295153 | orchestrator | Friday 20 February 2026 05:18:58 +0000 (0:00:01.104) 0:23:06.251 ******* 2026-02-20 05:19:46.295163 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295174 | orchestrator | 2026-02-20 05:19:46.295183 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:19:46.295193 | orchestrator | Friday 20 February 2026 05:19:00 +0000 (0:00:01.586) 0:23:07.837 ******* 2026-02-20 05:19:46.295202 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295212 | orchestrator | 2026-02-20 05:19:46.295222 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:19:46.295232 | orchestrator | Friday 20 February 2026 05:19:01 +0000 (0:00:01.090) 0:23:08.927 ******* 2026-02-20 05:19:46.295241 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295251 | orchestrator | 2026-02-20 05:19:46.295263 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:19:46.295280 | orchestrator | Friday 20 February 2026 05:19:02 +0000 (0:00:01.167) 0:23:10.095 ******* 2026-02-20 05:19:46.295296 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295320 | orchestrator | 2026-02-20 05:19:46.295339 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 05:19:46.295354 | orchestrator | Friday 20 February 2026 05:19:04 +0000 (0:00:01.518) 0:23:11.614 ******* 2026-02-20 05:19:46.295371 | orchestrator | ok: [testbed-node-0] 2026-02-20 
05:19:46.295386 | orchestrator | 2026-02-20 05:19:46.295401 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:19:46.295415 | orchestrator | Friday 20 February 2026 05:19:05 +0000 (0:00:01.598) 0:23:13.213 ******* 2026-02-20 05:19:46.295433 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295449 | orchestrator | 2026-02-20 05:19:46.295467 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:19:46.295484 | orchestrator | Friday 20 February 2026 05:19:06 +0000 (0:00:01.111) 0:23:14.324 ******* 2026-02-20 05:19:46.295503 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295520 | orchestrator | 2026-02-20 05:19:46.295535 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:19:46.295547 | orchestrator | Friday 20 February 2026 05:19:07 +0000 (0:00:01.117) 0:23:15.442 ******* 2026-02-20 05:19:46.295559 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295570 | orchestrator | 2026-02-20 05:19:46.295582 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:19:46.295593 | orchestrator | Friday 20 February 2026 05:19:09 +0000 (0:00:01.106) 0:23:16.549 ******* 2026-02-20 05:19:46.295604 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295615 | orchestrator | 2026-02-20 05:19:46.295626 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:19:46.295638 | orchestrator | Friday 20 February 2026 05:19:10 +0000 (0:00:01.100) 0:23:17.649 ******* 2026-02-20 05:19:46.295650 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295661 | orchestrator | 2026-02-20 05:19:46.295672 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:19:46.295683 | orchestrator | Friday 20 February 
2026 05:19:11 +0000 (0:00:01.142) 0:23:18.791 ******* 2026-02-20 05:19:46.295694 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295706 | orchestrator | 2026-02-20 05:19:46.295717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:19:46.295728 | orchestrator | Friday 20 February 2026 05:19:12 +0000 (0:00:01.140) 0:23:19.931 ******* 2026-02-20 05:19:46.295738 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295748 | orchestrator | 2026-02-20 05:19:46.295757 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:19:46.295767 | orchestrator | Friday 20 February 2026 05:19:13 +0000 (0:00:01.116) 0:23:21.048 ******* 2026-02-20 05:19:46.295776 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295837 | orchestrator | 2026-02-20 05:19:46.295849 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:19:46.295859 | orchestrator | Friday 20 February 2026 05:19:14 +0000 (0:00:01.146) 0:23:22.194 ******* 2026-02-20 05:19:46.295869 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295879 | orchestrator | 2026-02-20 05:19:46.295888 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:19:46.295898 | orchestrator | Friday 20 February 2026 05:19:15 +0000 (0:00:01.157) 0:23:23.352 ******* 2026-02-20 05:19:46.295908 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.295917 | orchestrator | 2026-02-20 05:19:46.295927 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:19:46.295937 | orchestrator | Friday 20 February 2026 05:19:17 +0000 (0:00:01.184) 0:23:24.537 ******* 2026-02-20 05:19:46.295947 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295956 | orchestrator | 2026-02-20 05:19:46.295966 | orchestrator | TASK [ceph-common : 
Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:19:46.295976 | orchestrator | Friday 20 February 2026 05:19:18 +0000 (0:00:01.161) 0:23:25.698 ******* 2026-02-20 05:19:46.295985 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.295995 | orchestrator | 2026-02-20 05:19:46.296005 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:19:46.296014 | orchestrator | Friday 20 February 2026 05:19:19 +0000 (0:00:01.094) 0:23:26.793 ******* 2026-02-20 05:19:46.296024 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296034 | orchestrator | 2026-02-20 05:19:46.296043 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:19:46.296053 | orchestrator | Friday 20 February 2026 05:19:20 +0000 (0:00:01.177) 0:23:27.971 ******* 2026-02-20 05:19:46.296081 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296092 | orchestrator | 2026-02-20 05:19:46.296102 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:19:46.296111 | orchestrator | Friday 20 February 2026 05:19:21 +0000 (0:00:01.116) 0:23:29.087 ******* 2026-02-20 05:19:46.296121 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296131 | orchestrator | 2026-02-20 05:19:46.296141 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:19:46.296150 | orchestrator | Friday 20 February 2026 05:19:22 +0000 (0:00:01.135) 0:23:30.223 ******* 2026-02-20 05:19:46.296160 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296169 | orchestrator | 2026-02-20 05:19:46.296186 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:19:46.296196 | orchestrator | Friday 20 February 2026 05:19:23 +0000 (0:00:01.107) 0:23:31.331 ******* 2026-02-20 05:19:46.296206 | 
orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296216 | orchestrator | 2026-02-20 05:19:46.296225 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:19:46.296236 | orchestrator | Friday 20 February 2026 05:19:24 +0000 (0:00:01.126) 0:23:32.457 ******* 2026-02-20 05:19:46.296245 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296257 | orchestrator | 2026-02-20 05:19:46.296273 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:19:46.296290 | orchestrator | Friday 20 February 2026 05:19:26 +0000 (0:00:01.126) 0:23:33.583 ******* 2026-02-20 05:19:46.296305 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296320 | orchestrator | 2026-02-20 05:19:46.296336 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:19:46.296349 | orchestrator | Friday 20 February 2026 05:19:27 +0000 (0:00:01.123) 0:23:34.707 ******* 2026-02-20 05:19:46.296364 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296382 | orchestrator | 2026-02-20 05:19:46.296399 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:19:46.296416 | orchestrator | Friday 20 February 2026 05:19:28 +0000 (0:00:01.159) 0:23:35.867 ******* 2026-02-20 05:19:46.296444 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296460 | orchestrator | 2026-02-20 05:19:46.296475 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:19:46.296485 | orchestrator | Friday 20 February 2026 05:19:29 +0000 (0:00:01.125) 0:23:36.992 ******* 2026-02-20 05:19:46.296495 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296505 | orchestrator | 2026-02-20 05:19:46.296514 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] 
*************** 2026-02-20 05:19:46.296524 | orchestrator | Friday 20 February 2026 05:19:30 +0000 (0:00:01.093) 0:23:38.086 ******* 2026-02-20 05:19:46.296534 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.296544 | orchestrator | 2026-02-20 05:19:46.296564 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:19:46.296587 | orchestrator | Friday 20 February 2026 05:19:32 +0000 (0:00:01.937) 0:23:40.024 ******* 2026-02-20 05:19:46.296603 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.296619 | orchestrator | 2026-02-20 05:19:46.296633 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:19:46.296648 | orchestrator | Friday 20 February 2026 05:19:35 +0000 (0:00:02.556) 0:23:42.581 ******* 2026-02-20 05:19:46.296663 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-20 05:19:46.296683 | orchestrator | 2026-02-20 05:19:46.296700 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:19:46.296717 | orchestrator | Friday 20 February 2026 05:19:36 +0000 (0:00:01.190) 0:23:43.772 ******* 2026-02-20 05:19:46.296733 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296748 | orchestrator | 2026-02-20 05:19:46.296759 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:19:46.296768 | orchestrator | Friday 20 February 2026 05:19:37 +0000 (0:00:01.109) 0:23:44.881 ******* 2026-02-20 05:19:46.296778 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.296820 | orchestrator | 2026-02-20 05:19:46.296835 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 05:19:46.296858 | orchestrator | Friday 20 February 2026 05:19:38 +0000 (0:00:01.114) 0:23:45.996 ******* 2026-02-20 05:19:46.296878 | 
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:19:46.296894 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:19:46.296910 | orchestrator | 2026-02-20 05:19:46.296924 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:19:46.296941 | orchestrator | Friday 20 February 2026 05:19:40 +0000 (0:00:01.870) 0:23:47.867 ******* 2026-02-20 05:19:46.296957 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:19:46.296974 | orchestrator | 2026-02-20 05:19:46.296991 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:19:46.297007 | orchestrator | Friday 20 February 2026 05:19:41 +0000 (0:00:01.443) 0:23:49.311 ******* 2026-02-20 05:19:46.297021 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.297031 | orchestrator | 2026-02-20 05:19:46.297041 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:19:46.297050 | orchestrator | Friday 20 February 2026 05:19:42 +0000 (0:00:01.121) 0:23:50.432 ******* 2026-02-20 05:19:46.297060 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.297070 | orchestrator | 2026-02-20 05:19:46.297079 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:19:46.297089 | orchestrator | Friday 20 February 2026 05:19:44 +0000 (0:00:01.105) 0:23:51.539 ******* 2026-02-20 05:19:46.297099 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:19:46.297109 | orchestrator | 2026-02-20 05:19:46.297118 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:19:46.297128 | orchestrator | Friday 20 February 2026 05:19:45 +0000 (0:00:01.110) 0:23:52.649 ******* 2026-02-20 05:19:46.297144 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-20 05:19:46.297181 | orchestrator | 2026-02-20 05:19:46.297212 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:20:32.370886 | orchestrator | Friday 20 February 2026 05:19:46 +0000 (0:00:01.114) 0:23:53.764 ******* 2026-02-20 05:20:32.371014 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:20:32.371024 | orchestrator | 2026-02-20 05:20:32.371031 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:20:32.371039 | orchestrator | Friday 20 February 2026 05:19:48 +0000 (0:00:01.848) 0:23:55.613 ******* 2026-02-20 05:20:32.371045 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:20:32.371067 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:20:32.371073 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:20:32.371079 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371086 | orchestrator | 2026-02-20 05:20:32.371092 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:20:32.371098 | orchestrator | Friday 20 February 2026 05:19:49 +0000 (0:00:01.113) 0:23:56.726 ******* 2026-02-20 05:20:32.371104 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371109 | orchestrator | 2026-02-20 05:20:32.371115 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:20:32.371120 | orchestrator | Friday 20 February 2026 05:19:50 +0000 (0:00:01.144) 0:23:57.871 ******* 2026-02-20 05:20:32.371126 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371131 | orchestrator | 2026-02-20 05:20:32.371137 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-02-20 05:20:32.371142 | orchestrator | Friday 20 February 2026 05:19:51 +0000 (0:00:01.159) 0:23:59.031 ******* 2026-02-20 05:20:32.371148 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371153 | orchestrator | 2026-02-20 05:20:32.371159 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:20:32.371164 | orchestrator | Friday 20 February 2026 05:19:52 +0000 (0:00:01.130) 0:24:00.162 ******* 2026-02-20 05:20:32.371170 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371175 | orchestrator | 2026-02-20 05:20:32.371181 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:20:32.371186 | orchestrator | Friday 20 February 2026 05:19:53 +0000 (0:00:01.141) 0:24:01.303 ******* 2026-02-20 05:20:32.371191 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371197 | orchestrator | 2026-02-20 05:20:32.371202 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:20:32.371208 | orchestrator | Friday 20 February 2026 05:19:54 +0000 (0:00:01.115) 0:24:02.419 ******* 2026-02-20 05:20:32.371213 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:20:32.371219 | orchestrator | 2026-02-20 05:20:32.371224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:20:32.371230 | orchestrator | Friday 20 February 2026 05:19:57 +0000 (0:00:02.543) 0:24:04.962 ******* 2026-02-20 05:20:32.371235 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:20:32.371241 | orchestrator | 2026-02-20 05:20:32.371246 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:20:32.371251 | orchestrator | Friday 20 February 2026 05:19:58 +0000 (0:00:01.122) 0:24:06.085 ******* 2026-02-20 05:20:32.371257 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-20 05:20:32.371263 | orchestrator | 2026-02-20 05:20:32.371268 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:20:32.371273 | orchestrator | Friday 20 February 2026 05:19:59 +0000 (0:00:01.114) 0:24:07.199 ******* 2026-02-20 05:20:32.371280 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371285 | orchestrator | 2026-02-20 05:20:32.371291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:20:32.371296 | orchestrator | Friday 20 February 2026 05:20:00 +0000 (0:00:01.194) 0:24:08.394 ******* 2026-02-20 05:20:32.371323 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371329 | orchestrator | 2026-02-20 05:20:32.371334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:20:32.371340 | orchestrator | Friday 20 February 2026 05:20:02 +0000 (0:00:01.161) 0:24:09.556 ******* 2026-02-20 05:20:32.371345 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371351 | orchestrator | 2026-02-20 05:20:32.371356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:20:32.371361 | orchestrator | Friday 20 February 2026 05:20:03 +0000 (0:00:01.141) 0:24:10.697 ******* 2026-02-20 05:20:32.371367 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371372 | orchestrator | 2026-02-20 05:20:32.371377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:20:32.371383 | orchestrator | Friday 20 February 2026 05:20:04 +0000 (0:00:01.121) 0:24:11.819 ******* 2026-02-20 05:20:32.371388 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371393 | orchestrator | 2026-02-20 05:20:32.371399 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-02-20 05:20:32.371405 | orchestrator | Friday 20 February 2026 05:20:05 +0000 (0:00:01.164) 0:24:12.984 ******* 2026-02-20 05:20:32.371412 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371418 | orchestrator | 2026-02-20 05:20:32.371424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:20:32.371431 | orchestrator | Friday 20 February 2026 05:20:06 +0000 (0:00:01.152) 0:24:14.136 ******* 2026-02-20 05:20:32.371437 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371443 | orchestrator | 2026-02-20 05:20:32.371449 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:20:32.371455 | orchestrator | Friday 20 February 2026 05:20:07 +0000 (0:00:01.129) 0:24:15.265 ******* 2026-02-20 05:20:32.371461 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371467 | orchestrator | 2026-02-20 05:20:32.371474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:20:32.371480 | orchestrator | Friday 20 February 2026 05:20:08 +0000 (0:00:01.147) 0:24:16.412 ******* 2026-02-20 05:20:32.371487 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:20:32.371493 | orchestrator | 2026-02-20 05:20:32.371512 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:20:32.371519 | orchestrator | Friday 20 February 2026 05:20:10 +0000 (0:00:01.138) 0:24:17.550 ******* 2026-02-20 05:20:32.371525 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-20 05:20:32.371542 | orchestrator | 2026-02-20 05:20:32.371548 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:20:32.371561 | orchestrator | Friday 20 February 2026 05:20:11 +0000 (0:00:01.217) 0:24:18.767 ******* 2026-02-20 
05:20:32.371568 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-20 05:20:32.371575 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-20 05:20:32.371581 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-20 05:20:32.371587 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-20 05:20:32.371594 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-20 05:20:32.371600 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-20 05:20:32.371606 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-20 05:20:32.371612 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:20:32.371651 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:20:32.371658 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:20:32.371665 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:20:32.371671 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:20:32.371678 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:20:32.371689 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:20:32.371695 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-20 05:20:32.371702 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-20 05:20:32.371708 | orchestrator | 2026-02-20 05:20:32.371714 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:20:32.371721 | orchestrator | Friday 20 February 2026 05:20:18 +0000 (0:00:06.884) 0:24:25.652 ******* 2026-02-20 05:20:32.371727 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371734 | orchestrator | 2026-02-20 05:20:32.371740 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-02-20 05:20:32.371746 | orchestrator | Friday 20 February 2026 05:20:19 +0000 (0:00:01.108) 0:24:26.761 ******* 2026-02-20 05:20:32.371752 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371758 | orchestrator | 2026-02-20 05:20:32.371763 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:20:32.371769 | orchestrator | Friday 20 February 2026 05:20:20 +0000 (0:00:01.206) 0:24:27.968 ******* 2026-02-20 05:20:32.371774 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371780 | orchestrator | 2026-02-20 05:20:32.371785 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:20:32.371791 | orchestrator | Friday 20 February 2026 05:20:21 +0000 (0:00:01.076) 0:24:29.044 ******* 2026-02-20 05:20:32.371796 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.371802 | orchestrator | 2026-02-20 05:20:32.371807 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:20:32.371813 | orchestrator | Friday 20 February 2026 05:20:22 +0000 (0:00:01.077) 0:24:30.121 ******* 2026-02-20 05:20:32.372096 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372129 | orchestrator | 2026-02-20 05:20:32.372140 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:20:32.372149 | orchestrator | Friday 20 February 2026 05:20:23 +0000 (0:00:00.886) 0:24:31.008 ******* 2026-02-20 05:20:32.372157 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372166 | orchestrator | 2026-02-20 05:20:32.372175 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:20:32.372185 | orchestrator | Friday 20 February 2026 05:20:24 +0000 (0:00:01.073) 0:24:32.081 ******* 2026-02-20 
05:20:32.372193 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372201 | orchestrator | 2026-02-20 05:20:32.372210 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:20:32.372218 | orchestrator | Friday 20 February 2026 05:20:25 +0000 (0:00:01.116) 0:24:33.198 ******* 2026-02-20 05:20:32.372226 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372234 | orchestrator | 2026-02-20 05:20:32.372242 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:20:32.372250 | orchestrator | Friday 20 February 2026 05:20:26 +0000 (0:00:01.092) 0:24:34.291 ******* 2026-02-20 05:20:32.372262 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372276 | orchestrator | 2026-02-20 05:20:32.372288 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:20:32.372303 | orchestrator | Friday 20 February 2026 05:20:27 +0000 (0:00:01.109) 0:24:35.400 ******* 2026-02-20 05:20:32.372316 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372330 | orchestrator | 2026-02-20 05:20:32.372344 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:20:32.372353 | orchestrator | Friday 20 February 2026 05:20:29 +0000 (0:00:01.102) 0:24:36.503 ******* 2026-02-20 05:20:32.372360 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372369 | orchestrator | 2026-02-20 05:20:32.372377 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:20:32.372385 | orchestrator | Friday 20 February 2026 05:20:30 +0000 (0:00:01.069) 0:24:37.573 ******* 2026-02-20 05:20:32.372421 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372429 | orchestrator | 2026-02-20 05:20:32.372437 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:20:32.372445 | orchestrator | Friday 20 February 2026 05:20:31 +0000 (0:00:01.079) 0:24:38.652 ******* 2026-02-20 05:20:32.372453 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:20:32.372461 | orchestrator | 2026-02-20 05:20:32.372499 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:21:30.098569 | orchestrator | Friday 20 February 2026 05:20:32 +0000 (0:00:01.187) 0:24:39.840 ******* 2026-02-20 05:21:30.098681 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098693 | orchestrator | 2026-02-20 05:21:30.098701 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:21:30.098710 | orchestrator | Friday 20 February 2026 05:20:33 +0000 (0:00:01.125) 0:24:40.965 ******* 2026-02-20 05:21:30.098734 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098743 | orchestrator | 2026-02-20 05:21:30.098750 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:21:30.098757 | orchestrator | Friday 20 February 2026 05:20:34 +0000 (0:00:01.180) 0:24:42.145 ******* 2026-02-20 05:21:30.098764 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098771 | orchestrator | 2026-02-20 05:21:30.098777 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:21:30.098784 | orchestrator | Friday 20 February 2026 05:20:35 +0000 (0:00:01.121) 0:24:43.267 ******* 2026-02-20 05:21:30.098790 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098797 | orchestrator | 2026-02-20 05:21:30.098808 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:21:30.098820 | orchestrator | Friday 20 February 
2026 05:20:36 +0000 (0:00:01.147) 0:24:44.414 ******* 2026-02-20 05:21:30.098831 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098841 | orchestrator | 2026-02-20 05:21:30.098873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:21:30.098884 | orchestrator | Friday 20 February 2026 05:20:37 +0000 (0:00:01.060) 0:24:45.474 ******* 2026-02-20 05:21:30.098894 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098903 | orchestrator | 2026-02-20 05:21:30.098913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:21:30.098923 | orchestrator | Friday 20 February 2026 05:20:39 +0000 (0:00:01.031) 0:24:46.505 ******* 2026-02-20 05:21:30.098931 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098941 | orchestrator | 2026-02-20 05:21:30.098951 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:21:30.098961 | orchestrator | Friday 20 February 2026 05:20:39 +0000 (0:00:00.888) 0:24:47.393 ******* 2026-02-20 05:21:30.098972 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.098984 | orchestrator | 2026-02-20 05:21:30.098994 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:21:30.099005 | orchestrator | Friday 20 February 2026 05:20:40 +0000 (0:00:00.893) 0:24:48.287 ******* 2026-02-20 05:21:30.099016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:21:30.099028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:21:30.099037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:21:30.099043 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099050 | orchestrator | 2026-02-20 05:21:30.099056 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - 
ipv4] ****** 2026-02-20 05:21:30.099063 | orchestrator | Friday 20 February 2026 05:20:42 +0000 (0:00:01.523) 0:24:49.810 ******* 2026-02-20 05:21:30.099069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:21:30.099075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:21:30.099101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:21:30.099108 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099115 | orchestrator | 2026-02-20 05:21:30.099123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:21:30.099130 | orchestrator | Friday 20 February 2026 05:20:43 +0000 (0:00:01.497) 0:24:51.308 ******* 2026-02-20 05:21:30.099139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-20 05:21:30.099150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-20 05:21:30.099160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-20 05:21:30.099170 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099180 | orchestrator | 2026-02-20 05:21:30.099191 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:21:30.099201 | orchestrator | Friday 20 February 2026 05:20:45 +0000 (0:00:01.538) 0:24:52.846 ******* 2026-02-20 05:21:30.099211 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099220 | orchestrator | 2026-02-20 05:21:30.099231 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:21:30.099240 | orchestrator | Friday 20 February 2026 05:20:46 +0000 (0:00:01.105) 0:24:53.952 ******* 2026-02-20 05:21:30.099251 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-20 05:21:30.099261 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099271 | orchestrator | 2026-02-20 
05:21:30.099281 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:21:30.099292 | orchestrator | Friday 20 February 2026 05:20:47 +0000 (0:00:01.232) 0:24:55.184 ******* 2026-02-20 05:21:30.099302 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099313 | orchestrator | 2026-02-20 05:21:30.099323 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:21:30.099334 | orchestrator | Friday 20 February 2026 05:20:49 +0000 (0:00:01.737) 0:24:56.922 ******* 2026-02-20 05:21:30.099344 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:21:30.099355 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:21:30.099365 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:21:30.099374 | orchestrator | 2026-02-20 05:21:30.099385 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 05:21:30.099395 | orchestrator | Friday 20 February 2026 05:20:51 +0000 (0:00:01.645) 0:24:58.568 ******* 2026-02-20 05:21:30.099407 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-20 05:21:30.099417 | orchestrator | 2026-02-20 05:21:30.099448 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-20 05:21:30.099459 | orchestrator | Friday 20 February 2026 05:20:52 +0000 (0:00:01.471) 0:25:00.040 ******* 2026-02-20 05:21:30.099465 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099472 | orchestrator | 2026-02-20 05:21:30.099478 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-20 05:21:30.099492 | orchestrator | Friday 20 February 2026 05:20:54 +0000 (0:00:01.547) 0:25:01.587 ******* 2026-02-20 05:21:30.099499 | orchestrator | 
skipping: [testbed-node-0] 2026-02-20 05:21:30.099505 | orchestrator | 2026-02-20 05:21:30.099511 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-20 05:21:30.099518 | orchestrator | Friday 20 February 2026 05:20:55 +0000 (0:00:01.105) 0:25:02.692 ******* 2026-02-20 05:21:30.099524 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 05:21:30.099530 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 05:21:30.099536 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 05:21:30.099543 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-20 05:21:30.099549 | orchestrator | 2026-02-20 05:21:30.099555 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-20 05:21:30.099561 | orchestrator | Friday 20 February 2026 05:21:02 +0000 (0:00:07.653) 0:25:10.346 ******* 2026-02-20 05:21:30.099577 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099584 | orchestrator | 2026-02-20 05:21:30.099590 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-20 05:21:30.099596 | orchestrator | Friday 20 February 2026 05:21:04 +0000 (0:00:01.179) 0:25:11.525 ******* 2026-02-20 05:21:30.099602 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-20 05:21:30.099609 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 05:21:30.099615 | orchestrator | 2026-02-20 05:21:30.099625 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:21:30.099632 | orchestrator | Friday 20 February 2026 05:21:07 +0000 (0:00:03.623) 0:25:15.149 ******* 2026-02-20 05:21:30.099638 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-20 05:21:30.099645 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-20 05:21:30.099651 | orchestrator | 2026-02-20 
05:21:30.099657 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-20 05:21:30.099663 | orchestrator | Friday 20 February 2026 05:21:09 +0000 (0:00:02.105) 0:25:17.254 ******* 2026-02-20 05:21:30.099669 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099676 | orchestrator | 2026-02-20 05:21:30.099682 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-20 05:21:30.099688 | orchestrator | Friday 20 February 2026 05:21:11 +0000 (0:00:01.500) 0:25:18.754 ******* 2026-02-20 05:21:30.099694 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099700 | orchestrator | 2026-02-20 05:21:30.099707 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 05:21:30.099713 | orchestrator | Friday 20 February 2026 05:21:12 +0000 (0:00:01.114) 0:25:19.869 ******* 2026-02-20 05:21:30.099719 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099725 | orchestrator | 2026-02-20 05:21:30.099731 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-20 05:21:30.099738 | orchestrator | Friday 20 February 2026 05:21:13 +0000 (0:00:01.146) 0:25:21.015 ******* 2026-02-20 05:21:30.099744 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-20 05:21:30.099750 | orchestrator | 2026-02-20 05:21:30.099756 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-20 05:21:30.099762 | orchestrator | Friday 20 February 2026 05:21:14 +0000 (0:00:01.455) 0:25:22.471 ******* 2026-02-20 05:21:30.099769 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099775 | orchestrator | 2026-02-20 05:21:30.099781 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-20 05:21:30.099787 | orchestrator | Friday 20 
February 2026 05:21:16 +0000 (0:00:01.111) 0:25:23.583 ******* 2026-02-20 05:21:30.099793 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099799 | orchestrator | 2026-02-20 05:21:30.099806 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-20 05:21:30.099812 | orchestrator | Friday 20 February 2026 05:21:17 +0000 (0:00:01.118) 0:25:24.702 ******* 2026-02-20 05:21:30.099818 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-20 05:21:30.099824 | orchestrator | 2026-02-20 05:21:30.099830 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-20 05:21:30.099836 | orchestrator | Friday 20 February 2026 05:21:18 +0000 (0:00:01.452) 0:25:26.154 ******* 2026-02-20 05:21:30.099843 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099871 | orchestrator | 2026-02-20 05:21:30.099879 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-20 05:21:30.099885 | orchestrator | Friday 20 February 2026 05:21:20 +0000 (0:00:02.019) 0:25:28.174 ******* 2026-02-20 05:21:30.099892 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099898 | orchestrator | 2026-02-20 05:21:30.099904 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-20 05:21:30.099910 | orchestrator | Friday 20 February 2026 05:21:22 +0000 (0:00:01.958) 0:25:30.132 ******* 2026-02-20 05:21:30.099922 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:21:30.099929 | orchestrator | 2026-02-20 05:21:30.099935 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-20 05:21:30.099942 | orchestrator | Friday 20 February 2026 05:21:25 +0000 (0:00:02.487) 0:25:32.619 ******* 2026-02-20 05:21:30.099948 | orchestrator | changed: [testbed-node-0] 2026-02-20 05:21:30.099954 | orchestrator | 2026-02-20 
05:21:30.099961 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 05:21:30.099967 | orchestrator | Friday 20 February 2026 05:21:29 +0000 (0:00:03.920) 0:25:36.540 ******* 2026-02-20 05:21:30.099973 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:21:30.099979 | orchestrator | 2026-02-20 05:21:30.099986 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-20 05:21:30.099992 | orchestrator | 2026-02-20 05:21:30.100003 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-20 05:21:56.962235 | orchestrator | Friday 20 February 2026 05:21:30 +0000 (0:00:01.032) 0:25:37.572 ******* 2026-02-20 05:21:56.962365 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:21:56.962383 | orchestrator | 2026-02-20 05:21:56.962397 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-20 05:21:56.962409 | orchestrator | Friday 20 February 2026 05:21:32 +0000 (0:00:02.541) 0:25:40.114 ******* 2026-02-20 05:21:56.962438 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:21:56.962457 | orchestrator | 2026-02-20 05:21:56.962476 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:21:56.962495 | orchestrator | Friday 20 February 2026 05:21:34 +0000 (0:00:02.167) 0:25:42.282 ******* 2026-02-20 05:21:56.962513 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-20 05:21:56.962531 | orchestrator | 2026-02-20 05:21:56.962550 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:21:56.962568 | orchestrator | Friday 20 February 2026 05:21:35 +0000 (0:00:01.101) 0:25:43.384 ******* 2026-02-20 05:21:56.962586 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.962605 | orchestrator | 2026-02-20 
05:21:56.962623 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:21:56.962641 | orchestrator | Friday 20 February 2026 05:21:37 +0000 (0:00:01.461) 0:25:44.845 ******* 2026-02-20 05:21:56.962659 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.962677 | orchestrator | 2026-02-20 05:21:56.962697 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:21:56.962717 | orchestrator | Friday 20 February 2026 05:21:38 +0000 (0:00:01.189) 0:25:46.035 ******* 2026-02-20 05:21:56.962734 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.962752 | orchestrator | 2026-02-20 05:21:56.962770 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:21:56.962788 | orchestrator | Friday 20 February 2026 05:21:39 +0000 (0:00:01.428) 0:25:47.463 ******* 2026-02-20 05:21:56.962806 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.962826 | orchestrator | 2026-02-20 05:21:56.962842 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:21:56.962858 | orchestrator | Friday 20 February 2026 05:21:41 +0000 (0:00:01.154) 0:25:48.618 ******* 2026-02-20 05:21:56.962955 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.962977 | orchestrator | 2026-02-20 05:21:56.962995 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:21:56.963013 | orchestrator | Friday 20 February 2026 05:21:42 +0000 (0:00:01.105) 0:25:49.724 ******* 2026-02-20 05:21:56.963031 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.963049 | orchestrator | 2026-02-20 05:21:56.963071 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:21:56.963091 | orchestrator | Friday 20 February 2026 05:21:43 +0000 (0:00:01.171) 0:25:50.895 ******* 
2026-02-20 05:21:56.963108 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:21:56.963124 | orchestrator | 2026-02-20 05:21:56.963174 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:21:56.963195 | orchestrator | Friday 20 February 2026 05:21:44 +0000 (0:00:01.110) 0:25:52.005 ******* 2026-02-20 05:21:56.963214 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.963232 | orchestrator | 2026-02-20 05:21:56.963250 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:21:56.963269 | orchestrator | Friday 20 February 2026 05:21:45 +0000 (0:00:01.129) 0:25:53.135 ******* 2026-02-20 05:21:56.963289 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:21:56.963307 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 05:21:56.963326 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:21:56.963344 | orchestrator | 2026-02-20 05:21:56.963362 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:21:56.963381 | orchestrator | Friday 20 February 2026 05:21:47 +0000 (0:00:01.697) 0:25:54.832 ******* 2026-02-20 05:21:56.963399 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:21:56.963418 | orchestrator | 2026-02-20 05:21:56.963437 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:21:56.963455 | orchestrator | Friday 20 February 2026 05:21:48 +0000 (0:00:01.234) 0:25:56.067 ******* 2026-02-20 05:21:56.963471 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:21:56.963482 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 05:21:56.963494 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-20 05:21:56.963505 | orchestrator | 2026-02-20 05:21:56.963515 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:21:56.963527 | orchestrator | Friday 20 February 2026 05:21:51 +0000 (0:00:02.726) 0:25:58.794 ******* 2026-02-20 05:21:56.963561 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 05:21:56.963586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-20 05:21:56.963612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 05:21:56.963631 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:21:56.963649 | orchestrator | 2026-02-20 05:21:56.963667 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:21:56.963686 | orchestrator | Friday 20 February 2026 05:21:52 +0000 (0:00:01.429) 0:26:00.223 ******* 2026-02-20 05:21:56.963709 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963761 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963796 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963817 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:21:56.963835 | orchestrator | 2026-02-20 05:21:56.963855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-20 05:21:56.963914 | orchestrator | Friday 20 February 2026 05:21:54 +0000 (0:00:01.892) 0:26:02.116 ******* 2026-02-20 05:21:56.963936 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963970 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963983 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:21:56.963996 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:21:56.964009 | orchestrator | 2026-02-20 05:21:56.964022 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:21:56.964035 | orchestrator | Friday 20 February 2026 05:21:55 +0000 (0:00:01.142) 0:26:03.259 ******* 2026-02-20 05:21:56.964050 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:21:49.082466', 'end': '2026-02-20 05:21:49.127120', 'delta': '0:00:00.044654', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:21:56.964066 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:21:49.611526', 'end': '2026-02-20 05:21:49.656625', 'delta': '0:00:00.045099', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:21:56.964091 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:21:50.148306', 'end': '2026-02-20 05:21:50.190655', 'delta': '0:00:00.042349', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:22:15.142498 | orchestrator | 2026-02-20 05:22:15.142624 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:22:15.142666 | orchestrator | Friday 20 February 2026 05:21:56 +0000 (0:00:01.172) 0:26:04.432 ******* 2026-02-20 05:22:15.142684 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:15.142697 | orchestrator | 2026-02-20 05:22:15.142709 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:22:15.142748 | orchestrator | Friday 20 February 2026 05:21:58 +0000 (0:00:01.236) 0:26:05.669 ******* 2026-02-20 05:22:15.142761 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.142775 | orchestrator | 2026-02-20 05:22:15.142787 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:22:15.142799 | orchestrator | Friday 20 February 2026 05:21:59 +0000 (0:00:01.217) 0:26:06.887 ******* 2026-02-20 05:22:15.142811 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:15.142823 | orchestrator | 2026-02-20 05:22:15.142834 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:22:15.142847 | orchestrator | Friday 20 February 2026 05:22:00 +0000 (0:00:01.150) 0:26:08.037 ******* 2026-02-20 05:22:15.142859 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:22:15.142871 | orchestrator | 2026-02-20 05:22:15.142944 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:22:15.142959 | orchestrator | Friday 20 February 2026 05:22:02 +0000 (0:00:01.963) 0:26:10.001 ******* 2026-02-20 
05:22:15.142971 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:15.142983 | orchestrator | 2026-02-20 05:22:15.142996 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:22:15.143010 | orchestrator | Friday 20 February 2026 05:22:03 +0000 (0:00:01.151) 0:26:11.153 ******* 2026-02-20 05:22:15.143022 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143035 | orchestrator | 2026-02-20 05:22:15.143045 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:22:15.143054 | orchestrator | Friday 20 February 2026 05:22:04 +0000 (0:00:01.104) 0:26:12.257 ******* 2026-02-20 05:22:15.143062 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143071 | orchestrator | 2026-02-20 05:22:15.143079 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:22:15.143087 | orchestrator | Friday 20 February 2026 05:22:05 +0000 (0:00:01.208) 0:26:13.466 ******* 2026-02-20 05:22:15.143095 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143104 | orchestrator | 2026-02-20 05:22:15.143112 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:22:15.143120 | orchestrator | Friday 20 February 2026 05:22:07 +0000 (0:00:01.157) 0:26:14.623 ******* 2026-02-20 05:22:15.143128 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143136 | orchestrator | 2026-02-20 05:22:15.143145 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:22:15.143153 | orchestrator | Friday 20 February 2026 05:22:08 +0000 (0:00:01.114) 0:26:15.738 ******* 2026-02-20 05:22:15.143161 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143170 | orchestrator | 2026-02-20 05:22:15.143178 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-02-20 05:22:15.143187 | orchestrator | Friday 20 February 2026 05:22:09 +0000 (0:00:01.117) 0:26:16.856 ******* 2026-02-20 05:22:15.143194 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143201 | orchestrator | 2026-02-20 05:22:15.143209 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:22:15.143216 | orchestrator | Friday 20 February 2026 05:22:10 +0000 (0:00:01.142) 0:26:17.998 ******* 2026-02-20 05:22:15.143223 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143230 | orchestrator | 2026-02-20 05:22:15.143237 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:22:15.143244 | orchestrator | Friday 20 February 2026 05:22:11 +0000 (0:00:01.120) 0:26:19.118 ******* 2026-02-20 05:22:15.143251 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143259 | orchestrator | 2026-02-20 05:22:15.143266 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:22:15.143274 | orchestrator | Friday 20 February 2026 05:22:12 +0000 (0:00:01.117) 0:26:20.236 ******* 2026-02-20 05:22:15.143281 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:15.143288 | orchestrator | 2026-02-20 05:22:15.143296 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:22:15.143312 | orchestrator | Friday 20 February 2026 05:22:13 +0000 (0:00:01.124) 0:26:21.360 ******* 2026-02-20 05:22:15.143322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:15.143333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:15.143366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:15.143377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:22:15.143387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-02-20 05:22:15.143395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:15.143403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:15.143420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 
'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:22:16.382925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:16.382997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:22:16.383004 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:16.383011 | orchestrator | 2026-02-20 05:22:16.383017 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:22:16.383023 | orchestrator | Friday 20 February 2026 05:22:15 +0000 (0:00:01.247) 0:26:22.608 ******* 2026-02-20 05:22:16.383031 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383038 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383057 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383063 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-29-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383087 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383097 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383104 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6a45b1b5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a45b1b5-2fa2-48ee-bac2-1b370ef97102-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:16.383122 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:52.637065 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:22:52.637178 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637191 | orchestrator | 2026-02-20 05:22:52.637200 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:22:52.637209 | 
orchestrator | Friday 20 February 2026 05:22:16 +0000 (0:00:01.244) 0:26:23.853 ******* 2026-02-20 05:22:52.637216 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.637224 | orchestrator | 2026-02-20 05:22:52.637232 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:22:52.637239 | orchestrator | Friday 20 February 2026 05:22:17 +0000 (0:00:01.502) 0:26:25.355 ******* 2026-02-20 05:22:52.637246 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.637253 | orchestrator | 2026-02-20 05:22:52.637260 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:22:52.637266 | orchestrator | Friday 20 February 2026 05:22:19 +0000 (0:00:01.139) 0:26:26.495 ******* 2026-02-20 05:22:52.637274 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.637281 | orchestrator | 2026-02-20 05:22:52.637288 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:22:52.637317 | orchestrator | Friday 20 February 2026 05:22:20 +0000 (0:00:01.492) 0:26:27.988 ******* 2026-02-20 05:22:52.637325 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637332 | orchestrator | 2026-02-20 05:22:52.637339 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:22:52.637346 | orchestrator | Friday 20 February 2026 05:22:21 +0000 (0:00:01.120) 0:26:29.108 ******* 2026-02-20 05:22:52.637353 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637360 | orchestrator | 2026-02-20 05:22:52.637367 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:22:52.637374 | orchestrator | Friday 20 February 2026 05:22:22 +0000 (0:00:01.235) 0:26:30.344 ******* 2026-02-20 05:22:52.637381 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637388 | orchestrator | 2026-02-20 05:22:52.637396 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:22:52.637408 | orchestrator | Friday 20 February 2026 05:22:24 +0000 (0:00:01.227) 0:26:31.572 ******* 2026-02-20 05:22:52.637425 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-20 05:22:52.637439 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 05:22:52.637450 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-20 05:22:52.637461 | orchestrator | 2026-02-20 05:22:52.637472 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:22:52.637481 | orchestrator | Friday 20 February 2026 05:22:25 +0000 (0:00:01.623) 0:26:33.196 ******* 2026-02-20 05:22:52.637493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-20 05:22:52.637504 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-20 05:22:52.637516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-20 05:22:52.637526 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637537 | orchestrator | 2026-02-20 05:22:52.637548 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:22:52.637559 | orchestrator | Friday 20 February 2026 05:22:26 +0000 (0:00:01.159) 0:26:34.356 ******* 2026-02-20 05:22:52.637571 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637582 | orchestrator | 2026-02-20 05:22:52.637593 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:22:52.637603 | orchestrator | Friday 20 February 2026 05:22:27 +0000 (0:00:01.100) 0:26:35.456 ******* 2026-02-20 05:22:52.637610 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:22:52.637618 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 
05:22:52.637624 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:22:52.637631 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:22:52.637638 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:22:52.637645 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:22:52.637651 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:22:52.637658 | orchestrator | 2026-02-20 05:22:52.637665 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:22:52.637671 | orchestrator | Friday 20 February 2026 05:22:30 +0000 (0:00:02.138) 0:26:37.595 ******* 2026-02-20 05:22:52.637678 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:22:52.637685 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 05:22:52.637703 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:22:52.637710 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:22:52.637732 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:22:52.637748 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:22:52.637755 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:22:52.637762 | orchestrator | 2026-02-20 05:22:52.637769 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:22:52.637776 | orchestrator | Friday 20 February 2026 05:22:32 +0000 (0:00:02.151) 0:26:39.746 
******* 2026-02-20 05:22:52.637782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-20 05:22:52.637790 | orchestrator | 2026-02-20 05:22:52.637797 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:22:52.637804 | orchestrator | Friday 20 February 2026 05:22:33 +0000 (0:00:01.220) 0:26:40.967 ******* 2026-02-20 05:22:52.637810 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-20 05:22:52.637817 | orchestrator | 2026-02-20 05:22:52.637824 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:22:52.637830 | orchestrator | Friday 20 February 2026 05:22:34 +0000 (0:00:01.188) 0:26:42.155 ******* 2026-02-20 05:22:52.637837 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.637844 | orchestrator | 2026-02-20 05:22:52.637851 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:22:52.637857 | orchestrator | Friday 20 February 2026 05:22:36 +0000 (0:00:01.560) 0:26:43.716 ******* 2026-02-20 05:22:52.637864 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637871 | orchestrator | 2026-02-20 05:22:52.637877 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:22:52.637884 | orchestrator | Friday 20 February 2026 05:22:37 +0000 (0:00:01.108) 0:26:44.825 ******* 2026-02-20 05:22:52.637890 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637897 | orchestrator | 2026-02-20 05:22:52.637924 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:22:52.637931 | orchestrator | Friday 20 February 2026 05:22:38 +0000 (0:00:01.150) 0:26:45.976 ******* 2026-02-20 05:22:52.637938 | orchestrator | skipping: [testbed-node-1] 2026-02-20 
05:22:52.637945 | orchestrator | 2026-02-20 05:22:52.637951 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:22:52.637958 | orchestrator | Friday 20 February 2026 05:22:39 +0000 (0:00:01.087) 0:26:47.064 ******* 2026-02-20 05:22:52.637965 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.637971 | orchestrator | 2026-02-20 05:22:52.637978 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:22:52.637984 | orchestrator | Friday 20 February 2026 05:22:41 +0000 (0:00:01.581) 0:26:48.645 ******* 2026-02-20 05:22:52.637991 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.637998 | orchestrator | 2026-02-20 05:22:52.638004 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:22:52.638011 | orchestrator | Friday 20 February 2026 05:22:42 +0000 (0:00:01.101) 0:26:49.747 ******* 2026-02-20 05:22:52.638091 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638104 | orchestrator | 2026-02-20 05:22:52.638116 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:22:52.638128 | orchestrator | Friday 20 February 2026 05:22:43 +0000 (0:00:01.105) 0:26:50.853 ******* 2026-02-20 05:22:52.638140 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.638149 | orchestrator | 2026-02-20 05:22:52.638156 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 05:22:52.638163 | orchestrator | Friday 20 February 2026 05:22:44 +0000 (0:00:01.566) 0:26:52.419 ******* 2026-02-20 05:22:52.638170 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.638177 | orchestrator | 2026-02-20 05:22:52.638183 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:22:52.638190 | orchestrator | Friday 20 February 2026 
05:22:46 +0000 (0:00:01.519) 0:26:53.939 ******* 2026-02-20 05:22:52.638197 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638211 | orchestrator | 2026-02-20 05:22:52.638217 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:22:52.638224 | orchestrator | Friday 20 February 2026 05:22:47 +0000 (0:00:00.761) 0:26:54.701 ******* 2026-02-20 05:22:52.638231 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:22:52.638237 | orchestrator | 2026-02-20 05:22:52.638244 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:22:52.638251 | orchestrator | Friday 20 February 2026 05:22:48 +0000 (0:00:00.798) 0:26:55.500 ******* 2026-02-20 05:22:52.638258 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638264 | orchestrator | 2026-02-20 05:22:52.638271 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:22:52.638278 | orchestrator | Friday 20 February 2026 05:22:48 +0000 (0:00:00.769) 0:26:56.269 ******* 2026-02-20 05:22:52.638284 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638291 | orchestrator | 2026-02-20 05:22:52.638298 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:22:52.638304 | orchestrator | Friday 20 February 2026 05:22:49 +0000 (0:00:00.760) 0:26:57.030 ******* 2026-02-20 05:22:52.638311 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638318 | orchestrator | 2026-02-20 05:22:52.638324 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:22:52.638331 | orchestrator | Friday 20 February 2026 05:22:50 +0000 (0:00:00.761) 0:26:57.792 ******* 2026-02-20 05:22:52.638338 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638344 | orchestrator | 2026-02-20 05:22:52.638351 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:22:52.638363 | orchestrator | Friday 20 February 2026 05:22:51 +0000 (0:00:00.752) 0:26:58.545 ******* 2026-02-20 05:22:52.638370 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:22:52.638377 | orchestrator | 2026-02-20 05:22:52.638383 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:22:52.638390 | orchestrator | Friday 20 February 2026 05:22:51 +0000 (0:00:00.777) 0:26:59.323 ******* 2026-02-20 05:22:52.638403 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.924716 | orchestrator | 2026-02-20 05:23:33.924809 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:23:33.924819 | orchestrator | Friday 20 February 2026 05:22:52 +0000 (0:00:00.786) 0:27:00.109 ******* 2026-02-20 05:23:33.924825 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.924832 | orchestrator | 2026-02-20 05:23:33.924839 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:23:33.924845 | orchestrator | Friday 20 February 2026 05:22:53 +0000 (0:00:00.803) 0:27:00.913 ******* 2026-02-20 05:23:33.924851 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.924857 | orchestrator | 2026-02-20 05:23:33.924862 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:23:33.924868 | orchestrator | Friday 20 February 2026 05:22:54 +0000 (0:00:00.796) 0:27:01.709 ******* 2026-02-20 05:23:33.924874 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.924881 | orchestrator | 2026-02-20 05:23:33.924887 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 05:23:33.924892 | orchestrator | Friday 20 February 2026 05:22:54 +0000 (0:00:00.751) 0:27:02.461 ******* 2026-02-20 05:23:33.924898 | 
orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.924904 | orchestrator | 2026-02-20 05:23:33.924910 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:23:33.924915 | orchestrator | Friday 20 February 2026 05:22:55 +0000 (0:00:00.777) 0:27:03.238 ******* 2026-02-20 05:23:33.924952 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.924961 | orchestrator | 2026-02-20 05:23:33.924970 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:23:33.924986 | orchestrator | Friday 20 February 2026 05:22:56 +0000 (0:00:00.811) 0:27:04.049 ******* 2026-02-20 05:23:33.924994 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925025 | orchestrator | 2026-02-20 05:23:33.925035 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:23:33.925044 | orchestrator | Friday 20 February 2026 05:22:57 +0000 (0:00:00.781) 0:27:04.831 ******* 2026-02-20 05:23:33.925054 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925063 | orchestrator | 2026-02-20 05:23:33.925072 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:23:33.925080 | orchestrator | Friday 20 February 2026 05:22:58 +0000 (0:00:00.792) 0:27:05.623 ******* 2026-02-20 05:23:33.925085 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925091 | orchestrator | 2026-02-20 05:23:33.925096 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:23:33.925102 | orchestrator | Friday 20 February 2026 05:22:58 +0000 (0:00:00.807) 0:27:06.430 ******* 2026-02-20 05:23:33.925108 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925113 | orchestrator | 2026-02-20 05:23:33.925119 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-02-20 05:23:33.925125 | orchestrator | Friday 20 February 2026 05:22:59 +0000 (0:00:00.756) 0:27:07.187 ******* 2026-02-20 05:23:33.925130 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925136 | orchestrator | 2026-02-20 05:23:33.925141 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:23:33.925147 | orchestrator | Friday 20 February 2026 05:23:00 +0000 (0:00:00.788) 0:27:07.975 ******* 2026-02-20 05:23:33.925152 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925158 | orchestrator | 2026-02-20 05:23:33.925163 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:23:33.925169 | orchestrator | Friday 20 February 2026 05:23:01 +0000 (0:00:00.753) 0:27:08.728 ******* 2026-02-20 05:23:33.925174 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925180 | orchestrator | 2026-02-20 05:23:33.925186 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:23:33.925191 | orchestrator | Friday 20 February 2026 05:23:02 +0000 (0:00:00.765) 0:27:09.494 ******* 2026-02-20 05:23:33.925197 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925206 | orchestrator | 2026-02-20 05:23:33.925215 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:23:33.925224 | orchestrator | Friday 20 February 2026 05:23:02 +0000 (0:00:00.753) 0:27:10.247 ******* 2026-02-20 05:23:33.925232 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925241 | orchestrator | 2026-02-20 05:23:33.925249 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:23:33.925258 | orchestrator | Friday 20 February 2026 05:23:03 +0000 (0:00:00.757) 0:27:11.005 ******* 2026-02-20 05:23:33.925267 | orchestrator | ok: [testbed-node-1] 
2026-02-20 05:23:33.925277 | orchestrator | 2026-02-20 05:23:33.925285 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:23:33.925293 | orchestrator | Friday 20 February 2026 05:23:05 +0000 (0:00:01.669) 0:27:12.674 ******* 2026-02-20 05:23:33.925302 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.925311 | orchestrator | 2026-02-20 05:23:33.925320 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:23:33.925329 | orchestrator | Friday 20 February 2026 05:23:07 +0000 (0:00:02.063) 0:27:14.738 ******* 2026-02-20 05:23:33.925339 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-20 05:23:33.925350 | orchestrator | 2026-02-20 05:23:33.925359 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:23:33.925370 | orchestrator | Friday 20 February 2026 05:23:08 +0000 (0:00:01.208) 0:27:15.946 ******* 2026-02-20 05:23:33.925376 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925382 | orchestrator | 2026-02-20 05:23:33.925389 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:23:33.925395 | orchestrator | Friday 20 February 2026 05:23:09 +0000 (0:00:01.094) 0:27:17.041 ******* 2026-02-20 05:23:33.925419 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925426 | orchestrator | 2026-02-20 05:23:33.925435 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 05:23:33.925444 | orchestrator | Friday 20 February 2026 05:23:10 +0000 (0:00:01.159) 0:27:18.200 ******* 2026-02-20 05:23:33.925469 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:23:33.925479 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:23:33.925488 | orchestrator | 2026-02-20 05:23:33.925499 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:23:33.925509 | orchestrator | Friday 20 February 2026 05:23:12 +0000 (0:00:01.884) 0:27:20.085 ******* 2026-02-20 05:23:33.925518 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.925528 | orchestrator | 2026-02-20 05:23:33.925538 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:23:33.925548 | orchestrator | Friday 20 February 2026 05:23:14 +0000 (0:00:01.503) 0:27:21.588 ******* 2026-02-20 05:23:33.925556 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925563 | orchestrator | 2026-02-20 05:23:33.925569 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:23:33.925576 | orchestrator | Friday 20 February 2026 05:23:15 +0000 (0:00:01.128) 0:27:22.717 ******* 2026-02-20 05:23:33.925582 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925588 | orchestrator | 2026-02-20 05:23:33.925595 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:23:33.925602 | orchestrator | Friday 20 February 2026 05:23:15 +0000 (0:00:00.768) 0:27:23.485 ******* 2026-02-20 05:23:33.925608 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925615 | orchestrator | 2026-02-20 05:23:33.925621 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:23:33.925628 | orchestrator | Friday 20 February 2026 05:23:16 +0000 (0:00:00.766) 0:27:24.251 ******* 2026-02-20 05:23:33.925634 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-20 05:23:33.925640 | orchestrator | 2026-02-20 05:23:33.925647 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:23:33.925653 | orchestrator | Friday 20 February 2026 05:23:17 +0000 (0:00:01.094) 0:27:25.346 ******* 2026-02-20 05:23:33.925659 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.925664 | orchestrator | 2026-02-20 05:23:33.925670 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:23:33.925675 | orchestrator | Friday 20 February 2026 05:23:19 +0000 (0:00:01.848) 0:27:27.195 ******* 2026-02-20 05:23:33.925681 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:23:33.925686 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:23:33.925692 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:23:33.925697 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925703 | orchestrator | 2026-02-20 05:23:33.925708 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:23:33.925714 | orchestrator | Friday 20 February 2026 05:23:20 +0000 (0:00:01.178) 0:27:28.373 ******* 2026-02-20 05:23:33.925719 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925724 | orchestrator | 2026-02-20 05:23:33.925730 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:23:33.925735 | orchestrator | Friday 20 February 2026 05:23:21 +0000 (0:00:01.106) 0:27:29.480 ******* 2026-02-20 05:23:33.925741 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925748 | orchestrator | 2026-02-20 05:23:33.925757 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:23:33.925765 | orchestrator | Friday 20 February 2026 05:23:23 +0000 (0:00:01.154) 0:27:30.635 ******* 2026-02-20 05:23:33.925780 
| orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925788 | orchestrator | 2026-02-20 05:23:33.925796 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:23:33.925804 | orchestrator | Friday 20 February 2026 05:23:24 +0000 (0:00:01.135) 0:27:31.771 ******* 2026-02-20 05:23:33.925814 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925822 | orchestrator | 2026-02-20 05:23:33.925831 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:23:33.925839 | orchestrator | Friday 20 February 2026 05:23:25 +0000 (0:00:01.121) 0:27:32.892 ******* 2026-02-20 05:23:33.925848 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.925857 | orchestrator | 2026-02-20 05:23:33.925866 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:23:33.925875 | orchestrator | Friday 20 February 2026 05:23:26 +0000 (0:00:00.801) 0:27:33.693 ******* 2026-02-20 05:23:33.925883 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.925892 | orchestrator | 2026-02-20 05:23:33.925902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:23:33.925911 | orchestrator | Friday 20 February 2026 05:23:28 +0000 (0:00:02.239) 0:27:35.933 ******* 2026-02-20 05:23:33.925964 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:23:33.925971 | orchestrator | 2026-02-20 05:23:33.925977 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:23:33.925982 | orchestrator | Friday 20 February 2026 05:23:29 +0000 (0:00:00.763) 0:27:36.696 ******* 2026-02-20 05:23:33.925988 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-20 05:23:33.925993 | orchestrator | 2026-02-20 05:23:33.925999 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-02-20 05:23:33.926004 | orchestrator | Friday 20 February 2026 05:23:30 +0000 (0:00:01.196) 0:27:37.893 ******* 2026-02-20 05:23:33.926077 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.926091 | orchestrator | 2026-02-20 05:23:33.926099 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:23:33.926108 | orchestrator | Friday 20 February 2026 05:23:31 +0000 (0:00:01.113) 0:27:39.006 ******* 2026-02-20 05:23:33.926124 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.926134 | orchestrator | 2026-02-20 05:23:33.926143 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:23:33.926152 | orchestrator | Friday 20 February 2026 05:23:32 +0000 (0:00:01.150) 0:27:40.156 ******* 2026-02-20 05:23:33.926161 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:23:33.926169 | orchestrator | 2026-02-20 05:23:33.926184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:24:07.468270 | orchestrator | Friday 20 February 2026 05:23:33 +0000 (0:00:01.236) 0:27:41.393 ******* 2026-02-20 05:24:07.468366 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468378 | orchestrator | 2026-02-20 05:24:07.468385 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:24:07.468392 | orchestrator | Friday 20 February 2026 05:23:35 +0000 (0:00:01.136) 0:27:42.530 ******* 2026-02-20 05:24:07.468398 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468404 | orchestrator | 2026-02-20 05:24:07.468410 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:24:07.468416 | orchestrator | Friday 20 February 2026 05:23:36 +0000 (0:00:01.121) 0:27:43.651 ******* 2026-02-20 05:24:07.468423 | orchestrator | 
skipping: [testbed-node-1] 2026-02-20 05:24:07.468430 | orchestrator | 2026-02-20 05:24:07.468436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:24:07.468443 | orchestrator | Friday 20 February 2026 05:23:37 +0000 (0:00:01.137) 0:27:44.788 ******* 2026-02-20 05:24:07.468449 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468455 | orchestrator | 2026-02-20 05:24:07.468463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:24:07.468503 | orchestrator | Friday 20 February 2026 05:23:38 +0000 (0:00:01.112) 0:27:45.901 ******* 2026-02-20 05:24:07.468510 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468517 | orchestrator | 2026-02-20 05:24:07.468524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:24:07.468531 | orchestrator | Friday 20 February 2026 05:23:39 +0000 (0:00:01.122) 0:27:47.024 ******* 2026-02-20 05:24:07.468538 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:24:07.468546 | orchestrator | 2026-02-20 05:24:07.468553 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:24:07.468559 | orchestrator | Friday 20 February 2026 05:23:40 +0000 (0:00:00.786) 0:27:47.810 ******* 2026-02-20 05:24:07.468566 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-20 05:24:07.468573 | orchestrator | 2026-02-20 05:24:07.468580 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:24:07.468589 | orchestrator | Friday 20 February 2026 05:23:41 +0000 (0:00:01.093) 0:27:48.904 ******* 2026-02-20 05:24:07.468595 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-20 05:24:07.468602 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-20 
05:24:07.468608 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-20 05:24:07.468614 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-20 05:24:07.468620 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-20 05:24:07.468627 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-20 05:24:07.468634 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-20 05:24:07.468639 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:24:07.468643 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:24:07.468648 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:24:07.468652 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:24:07.468655 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:24:07.468659 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:24:07.468663 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:24:07.468667 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-02-20 05:24:07.468671 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-02-20 05:24:07.468675 | orchestrator | 2026-02-20 05:24:07.468679 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:24:07.468683 | orchestrator | Friday 20 February 2026 05:23:47 +0000 (0:00:06.545) 0:27:55.449 ******* 2026-02-20 05:24:07.468687 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468690 | orchestrator | 2026-02-20 05:24:07.468694 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:24:07.468698 | orchestrator | Friday 20 February 2026 05:23:48 +0000 (0:00:00.777) 0:27:56.227 ******* 
2026-02-20 05:24:07.468702 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468706 | orchestrator | 2026-02-20 05:24:07.468709 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:24:07.468713 | orchestrator | Friday 20 February 2026 05:23:49 +0000 (0:00:00.757) 0:27:56.984 ******* 2026-02-20 05:24:07.468717 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468721 | orchestrator | 2026-02-20 05:24:07.468725 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:24:07.468729 | orchestrator | Friday 20 February 2026 05:23:50 +0000 (0:00:00.760) 0:27:57.745 ******* 2026-02-20 05:24:07.468733 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468737 | orchestrator | 2026-02-20 05:24:07.468740 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:24:07.468744 | orchestrator | Friday 20 February 2026 05:23:51 +0000 (0:00:00.776) 0:27:58.522 ******* 2026-02-20 05:24:07.468760 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468764 | orchestrator | 2026-02-20 05:24:07.468768 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:24:07.468772 | orchestrator | Friday 20 February 2026 05:23:51 +0000 (0:00:00.747) 0:27:59.270 ******* 2026-02-20 05:24:07.468776 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468780 | orchestrator | 2026-02-20 05:24:07.468796 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:24:07.468800 | orchestrator | Friday 20 February 2026 05:23:52 +0000 (0:00:00.755) 0:28:00.025 ******* 2026-02-20 05:24:07.468804 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468808 | orchestrator | 2026-02-20 05:24:07.468824 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:24:07.468829 | orchestrator | Friday 20 February 2026 05:23:53 +0000 (0:00:00.757) 0:28:00.783 ******* 2026-02-20 05:24:07.468833 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468838 | orchestrator | 2026-02-20 05:24:07.468842 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:24:07.468847 | orchestrator | Friday 20 February 2026 05:23:54 +0000 (0:00:00.776) 0:28:01.559 ******* 2026-02-20 05:24:07.468851 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468856 | orchestrator | 2026-02-20 05:24:07.468860 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:24:07.468865 | orchestrator | Friday 20 February 2026 05:23:54 +0000 (0:00:00.786) 0:28:02.346 ******* 2026-02-20 05:24:07.468873 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468878 | orchestrator | 2026-02-20 05:24:07.468882 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:24:07.468887 | orchestrator | Friday 20 February 2026 05:23:55 +0000 (0:00:00.779) 0:28:03.126 ******* 2026-02-20 05:24:07.468891 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468895 | orchestrator | 2026-02-20 05:24:07.468900 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:24:07.468905 | orchestrator | Friday 20 February 2026 05:23:56 +0000 (0:00:00.791) 0:28:03.917 ******* 2026-02-20 05:24:07.468909 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468914 | orchestrator | 2026-02-20 05:24:07.468918 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:24:07.468923 | orchestrator | Friday 20 February 2026 05:23:57 +0000 
(0:00:00.762) 0:28:04.680 ******* 2026-02-20 05:24:07.468927 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468949 | orchestrator | 2026-02-20 05:24:07.468954 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:24:07.468958 | orchestrator | Friday 20 February 2026 05:23:58 +0000 (0:00:00.850) 0:28:05.530 ******* 2026-02-20 05:24:07.468963 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468967 | orchestrator | 2026-02-20 05:24:07.468972 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:24:07.468976 | orchestrator | Friday 20 February 2026 05:23:58 +0000 (0:00:00.751) 0:28:06.282 ******* 2026-02-20 05:24:07.468981 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.468985 | orchestrator | 2026-02-20 05:24:07.468989 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:24:07.468994 | orchestrator | Friday 20 February 2026 05:23:59 +0000 (0:00:00.851) 0:28:07.133 ******* 2026-02-20 05:24:07.468998 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469003 | orchestrator | 2026-02-20 05:24:07.469007 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:24:07.469012 | orchestrator | Friday 20 February 2026 05:24:00 +0000 (0:00:00.773) 0:28:07.907 ******* 2026-02-20 05:24:07.469017 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469021 | orchestrator | 2026-02-20 05:24:07.469025 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:24:07.469037 | orchestrator | Friday 20 February 2026 05:24:01 +0000 (0:00:00.758) 0:28:08.666 ******* 2026-02-20 05:24:07.469042 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469046 | orchestrator | 
2026-02-20 05:24:07.469051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:24:07.469055 | orchestrator | Friday 20 February 2026 05:24:01 +0000 (0:00:00.795) 0:28:09.462 ******* 2026-02-20 05:24:07.469060 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469064 | orchestrator | 2026-02-20 05:24:07.469068 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:24:07.469073 | orchestrator | Friday 20 February 2026 05:24:02 +0000 (0:00:00.765) 0:28:10.227 ******* 2026-02-20 05:24:07.469077 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469082 | orchestrator | 2026-02-20 05:24:07.469088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:24:07.469094 | orchestrator | Friday 20 February 2026 05:24:03 +0000 (0:00:00.771) 0:28:10.999 ******* 2026-02-20 05:24:07.469101 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469107 | orchestrator | 2026-02-20 05:24:07.469114 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:24:07.469121 | orchestrator | Friday 20 February 2026 05:24:04 +0000 (0:00:00.767) 0:28:11.767 ******* 2026-02-20 05:24:07.469127 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:24:07.469133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:24:07.469140 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:24:07.469146 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469151 | orchestrator | 2026-02-20 05:24:07.469156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:24:07.469164 | orchestrator | Friday 20 February 2026 05:24:05 +0000 (0:00:01.059) 0:28:12.826 ******* 2026-02-20 05:24:07.469170 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:24:07.469177 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:24:07.469184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:24:07.469190 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469197 | orchestrator | 2026-02-20 05:24:07.469203 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:24:07.469215 | orchestrator | Friday 20 February 2026 05:24:06 +0000 (0:00:01.050) 0:28:13.876 ******* 2026-02-20 05:24:07.469222 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-20 05:24:07.469229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-20 05:24:07.469235 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-20 05:24:07.469239 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:24:07.469243 | orchestrator | 2026-02-20 05:24:07.469251 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:25:07.713272 | orchestrator | Friday 20 February 2026 05:24:07 +0000 (0:00:01.061) 0:28:14.937 ******* 2026-02-20 05:25:07.713422 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.713443 | orchestrator | 2026-02-20 05:25:07.713456 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:25:07.713468 | orchestrator | Friday 20 February 2026 05:24:08 +0000 (0:00:00.769) 0:28:15.707 ******* 2026-02-20 05:25:07.713481 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-20 05:25:07.713493 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.713504 | orchestrator | 2026-02-20 05:25:07.713518 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:25:07.713531 | orchestrator | Friday 
20 February 2026 05:24:09 +0000 (0:00:00.886) 0:28:16.593 ******* 2026-02-20 05:25:07.713542 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.713554 | orchestrator | 2026-02-20 05:25:07.713565 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:25:07.713605 | orchestrator | Friday 20 February 2026 05:24:10 +0000 (0:00:01.463) 0:28:18.057 ******* 2026-02-20 05:25:07.713617 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:25:07.713629 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-20 05:25:07.713641 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:25:07.713653 | orchestrator | 2026-02-20 05:25:07.713664 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-20 05:25:07.713676 | orchestrator | Friday 20 February 2026 05:24:12 +0000 (0:00:01.566) 0:28:19.623 ******* 2026-02-20 05:25:07.713688 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-02-20 05:25:07.713699 | orchestrator | 2026-02-20 05:25:07.713711 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-20 05:25:07.713722 | orchestrator | Friday 20 February 2026 05:24:13 +0000 (0:00:01.084) 0:28:20.708 ******* 2026-02-20 05:25:07.713734 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.713746 | orchestrator | 2026-02-20 05:25:07.713757 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-20 05:25:07.713770 | orchestrator | Friday 20 February 2026 05:24:14 +0000 (0:00:01.513) 0:28:22.221 ******* 2026-02-20 05:25:07.713781 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.713795 | orchestrator | 2026-02-20 05:25:07.713808 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a 
mon node] ********************* 2026-02-20 05:25:07.713821 | orchestrator | Friday 20 February 2026 05:24:15 +0000 (0:00:01.122) 0:28:23.344 ******* 2026-02-20 05:25:07.713834 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:25:07.713848 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:25:07.713862 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:25:07.713874 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-02-20 05:25:07.713886 | orchestrator | 2026-02-20 05:25:07.713898 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-20 05:25:07.713909 | orchestrator | Friday 20 February 2026 05:24:23 +0000 (0:00:07.380) 0:28:30.725 ******* 2026-02-20 05:25:07.713920 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.713932 | orchestrator | 2026-02-20 05:25:07.713944 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-20 05:25:07.714073 | orchestrator | Friday 20 February 2026 05:24:24 +0000 (0:00:01.139) 0:28:31.865 ******* 2026-02-20 05:25:07.714095 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-20 05:25:07.714107 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-20 05:25:07.714120 | orchestrator | 2026-02-20 05:25:07.714132 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:25:07.714144 | orchestrator | Friday 20 February 2026 05:24:27 +0000 (0:00:03.381) 0:28:35.247 ******* 2026-02-20 05:25:07.714156 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-20 05:25:07.714168 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-20 05:25:07.714180 | orchestrator | 2026-02-20 05:25:07.714192 | orchestrator | TASK [ceph-mgr : Set mgr key 
permissions] ************************************** 2026-02-20 05:25:07.714203 | orchestrator | Friday 20 February 2026 05:24:29 +0000 (0:00:02.079) 0:28:37.326 ******* 2026-02-20 05:25:07.714215 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.714227 | orchestrator | 2026-02-20 05:25:07.714239 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-20 05:25:07.714250 | orchestrator | Friday 20 February 2026 05:24:31 +0000 (0:00:01.468) 0:28:38.795 ******* 2026-02-20 05:25:07.714261 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.714273 | orchestrator | 2026-02-20 05:25:07.714285 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-20 05:25:07.714311 | orchestrator | Friday 20 February 2026 05:24:32 +0000 (0:00:00.768) 0:28:39.563 ******* 2026-02-20 05:25:07.714324 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.714336 | orchestrator | 2026-02-20 05:25:07.714348 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-20 05:25:07.714359 | orchestrator | Friday 20 February 2026 05:24:32 +0000 (0:00:00.746) 0:28:40.310 ******* 2026-02-20 05:25:07.714370 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-02-20 05:25:07.714382 | orchestrator | 2026-02-20 05:25:07.714394 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-20 05:25:07.714422 | orchestrator | Friday 20 February 2026 05:24:33 +0000 (0:00:01.150) 0:28:41.460 ******* 2026-02-20 05:25:07.714434 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.714446 | orchestrator | 2026-02-20 05:25:07.714458 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-20 05:25:07.714470 | orchestrator | Friday 20 February 2026 05:24:35 +0000 (0:00:01.133) 0:28:42.594 ******* 
2026-02-20 05:25:07.714482 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.714494 | orchestrator | 2026-02-20 05:25:07.714530 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-20 05:25:07.714542 | orchestrator | Friday 20 February 2026 05:24:36 +0000 (0:00:01.198) 0:28:43.792 ******* 2026-02-20 05:25:07.714553 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-02-20 05:25:07.714564 | orchestrator | 2026-02-20 05:25:07.714575 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-20 05:25:07.714586 | orchestrator | Friday 20 February 2026 05:24:37 +0000 (0:00:01.306) 0:28:45.099 ******* 2026-02-20 05:25:07.714597 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.714608 | orchestrator | 2026-02-20 05:25:07.714621 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-20 05:25:07.714632 | orchestrator | Friday 20 February 2026 05:24:39 +0000 (0:00:02.005) 0:28:47.104 ******* 2026-02-20 05:25:07.714644 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.714656 | orchestrator | 2026-02-20 05:25:07.714686 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-20 05:25:07.714699 | orchestrator | Friday 20 February 2026 05:24:41 +0000 (0:00:01.944) 0:28:49.049 ******* 2026-02-20 05:25:07.714721 | orchestrator | ok: [testbed-node-1] 2026-02-20 05:25:07.714733 | orchestrator | 2026-02-20 05:25:07.714744 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-20 05:25:07.714754 | orchestrator | Friday 20 February 2026 05:24:44 +0000 (0:00:02.545) 0:28:51.595 ******* 2026-02-20 05:25:07.714765 | orchestrator | changed: [testbed-node-1] 2026-02-20 05:25:07.714778 | orchestrator | 2026-02-20 05:25:07.714789 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2026-02-20 05:25:07.714801 | orchestrator | Friday 20 February 2026 05:24:47 +0000 (0:00:03.585) 0:28:55.180 ******* 2026-02-20 05:25:07.714814 | orchestrator | skipping: [testbed-node-1] 2026-02-20 05:25:07.714827 | orchestrator | 2026-02-20 05:25:07.714839 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-20 05:25:07.714852 | orchestrator | 2026-02-20 05:25:07.714864 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-20 05:25:07.714876 | orchestrator | Friday 20 February 2026 05:24:48 +0000 (0:00:01.024) 0:28:56.205 ******* 2026-02-20 05:25:07.714889 | orchestrator | changed: [testbed-node-2] 2026-02-20 05:25:07.714902 | orchestrator | 2026-02-20 05:25:07.714913 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-20 05:25:07.714924 | orchestrator | Friday 20 February 2026 05:24:51 +0000 (0:00:02.659) 0:28:58.865 ******* 2026-02-20 05:25:07.714936 | orchestrator | changed: [testbed-node-2] 2026-02-20 05:25:07.714948 | orchestrator | 2026-02-20 05:25:07.714983 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:25:07.714996 | orchestrator | Friday 20 February 2026 05:24:53 +0000 (0:00:02.221) 0:29:01.086 ******* 2026-02-20 05:25:07.715019 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-20 05:25:07.715031 | orchestrator | 2026-02-20 05:25:07.715042 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:25:07.715053 | orchestrator | Friday 20 February 2026 05:24:54 +0000 (0:00:01.096) 0:29:02.182 ******* 2026-02-20 05:25:07.715064 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715076 | orchestrator | 2026-02-20 05:25:07.715088 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2026-02-20 05:25:07.715101 | orchestrator | Friday 20 February 2026 05:24:56 +0000 (0:00:01.534) 0:29:03.717 ******* 2026-02-20 05:25:07.715112 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715123 | orchestrator | 2026-02-20 05:25:07.715135 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:25:07.715147 | orchestrator | Friday 20 February 2026 05:24:57 +0000 (0:00:01.134) 0:29:04.851 ******* 2026-02-20 05:25:07.715160 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715172 | orchestrator | 2026-02-20 05:25:07.715185 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:25:07.715195 | orchestrator | Friday 20 February 2026 05:24:58 +0000 (0:00:01.446) 0:29:06.298 ******* 2026-02-20 05:25:07.715206 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715218 | orchestrator | 2026-02-20 05:25:07.715229 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:25:07.715240 | orchestrator | Friday 20 February 2026 05:24:59 +0000 (0:00:01.123) 0:29:07.422 ******* 2026-02-20 05:25:07.715250 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715261 | orchestrator | 2026-02-20 05:25:07.715273 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:25:07.715284 | orchestrator | Friday 20 February 2026 05:25:01 +0000 (0:00:01.154) 0:29:08.577 ******* 2026-02-20 05:25:07.715295 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715306 | orchestrator | 2026-02-20 05:25:07.715317 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:25:07.715331 | orchestrator | Friday 20 February 2026 05:25:02 +0000 (0:00:01.140) 0:29:09.717 ******* 2026-02-20 05:25:07.715343 | orchestrator | skipping: [testbed-node-2] 
2026-02-20 05:25:07.715356 | orchestrator | 2026-02-20 05:25:07.715368 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:25:07.715380 | orchestrator | Friday 20 February 2026 05:25:03 +0000 (0:00:01.144) 0:29:10.862 ******* 2026-02-20 05:25:07.715391 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:07.715403 | orchestrator | 2026-02-20 05:25:07.715416 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:25:07.715427 | orchestrator | Friday 20 February 2026 05:25:04 +0000 (0:00:01.102) 0:29:11.965 ******* 2026-02-20 05:25:07.715439 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:25:07.715461 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:25:07.715475 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:25:07.715486 | orchestrator | 2026-02-20 05:25:07.715500 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:25:07.715512 | orchestrator | Friday 20 February 2026 05:25:06 +0000 (0:00:01.976) 0:29:13.942 ******* 2026-02-20 05:25:07.715541 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:31.520401 | orchestrator | 2026-02-20 05:25:31.520523 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:25:31.520543 | orchestrator | Friday 20 February 2026 05:25:07 +0000 (0:00:01.242) 0:29:15.184 ******* 2026-02-20 05:25:31.520556 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:25:31.520568 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:25:31.520582 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:25:31.520621 | orchestrator | 
2026-02-20 05:25:31.520633 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:25:31.520645 | orchestrator | Friday 20 February 2026 05:25:10 +0000 (0:00:03.258) 0:29:18.443 ******* 2026-02-20 05:25:31.520658 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-20 05:25:31.520669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-20 05:25:31.520681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-20 05:25:31.520692 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.520704 | orchestrator | 2026-02-20 05:25:31.520715 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:25:31.520726 | orchestrator | Friday 20 February 2026 05:25:12 +0000 (0:00:01.439) 0:29:19.883 ******* 2026-02-20 05:25:31.520739 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520753 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520765 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520777 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.520788 | orchestrator | 2026-02-20 05:25:31.520799 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:25:31.520811 | 
orchestrator | Friday 20 February 2026 05:25:14 +0000 (0:00:01.937) 0:29:21.820 ******* 2026-02-20 05:25:31.520825 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520840 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:31.520862 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.520874 | orchestrator | 2026-02-20 05:25:31.520885 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:25:31.520897 | orchestrator | Friday 20 February 2026 05:25:15 +0000 (0:00:01.178) 0:29:22.999 ******* 2026-02-20 05:25:31.520948 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:25:08.392464', 'end': '2026-02-20 05:25:08.443743', 'delta': '0:00:00.051279', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:25:31.521002 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:25:08.939581', 'end': '2026-02-20 05:25:08.985563', 'delta': '0:00:00.045982', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:25:31.521016 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:25:09.750721', 'end': '2026-02-20 05:25:09.791377', 'delta': '0:00:00.040656', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:25:31.521029 | orchestrator | 2026-02-20 05:25:31.521043 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:25:31.521056 | orchestrator | Friday 20 February 2026 05:25:16 +0000 (0:00:01.204) 0:29:24.204 ******* 2026-02-20 05:25:31.521069 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:31.521082 | orchestrator | 2026-02-20 05:25:31.521094 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:25:31.521107 | orchestrator | Friday 20 February 2026 05:25:17 +0000 (0:00:01.264) 0:29:25.468 ******* 2026-02-20 05:25:31.521120 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521132 | orchestrator | 2026-02-20 05:25:31.521145 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:25:31.521158 | orchestrator | Friday 20 February 2026 05:25:19 +0000 (0:00:01.262) 0:29:26.730 ******* 2026-02-20 05:25:31.521171 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:31.521183 | orchestrator | 2026-02-20 05:25:31.521196 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:25:31.521208 | orchestrator | Friday 20 February 2026 05:25:20 +0000 (0:00:01.206) 0:29:27.937 ******* 2026-02-20 05:25:31.521221 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:25:31.521234 | orchestrator | 2026-02-20 05:25:31.521246 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:25:31.521259 | orchestrator | Friday 20 February 2026 05:25:22 +0000 (0:00:01.996) 0:29:29.933 ******* 2026-02-20 05:25:31.521271 | orchestrator | ok: [testbed-node-2] 2026-02-20 
05:25:31.521281 | orchestrator | 2026-02-20 05:25:31.521292 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:25:31.521303 | orchestrator | Friday 20 February 2026 05:25:23 +0000 (0:00:01.134) 0:29:31.068 ******* 2026-02-20 05:25:31.521314 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521325 | orchestrator | 2026-02-20 05:25:31.521336 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:25:31.521347 | orchestrator | Friday 20 February 2026 05:25:24 +0000 (0:00:01.096) 0:29:32.164 ******* 2026-02-20 05:25:31.521358 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521375 | orchestrator | 2026-02-20 05:25:31.521387 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:25:31.521398 | orchestrator | Friday 20 February 2026 05:25:25 +0000 (0:00:01.206) 0:29:33.370 ******* 2026-02-20 05:25:31.521409 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521420 | orchestrator | 2026-02-20 05:25:31.521431 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:25:31.521442 | orchestrator | Friday 20 February 2026 05:25:27 +0000 (0:00:01.142) 0:29:34.512 ******* 2026-02-20 05:25:31.521453 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521464 | orchestrator | 2026-02-20 05:25:31.521475 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:25:31.521486 | orchestrator | Friday 20 February 2026 05:25:28 +0000 (0:00:01.131) 0:29:35.644 ******* 2026-02-20 05:25:31.521497 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521508 | orchestrator | 2026-02-20 05:25:31.521519 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:25:31.521530 | orchestrator | Friday 20 
February 2026 05:25:29 +0000 (0:00:01.113) 0:29:36.757 ******* 2026-02-20 05:25:31.521541 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521552 | orchestrator | 2026-02-20 05:25:31.521568 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:25:31.521580 | orchestrator | Friday 20 February 2026 05:25:30 +0000 (0:00:01.138) 0:29:37.896 ******* 2026-02-20 05:25:31.521591 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:31.521601 | orchestrator | 2026-02-20 05:25:31.521613 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:25:31.521631 | orchestrator | Friday 20 February 2026 05:25:31 +0000 (0:00:01.091) 0:29:38.988 ******* 2026-02-20 05:25:36.216603 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:36.216719 | orchestrator | 2026-02-20 05:25:36.216736 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:25:36.216748 | orchestrator | Friday 20 February 2026 05:25:32 +0000 (0:00:01.108) 0:29:40.096 ******* 2026-02-20 05:25:36.216761 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:36.216773 | orchestrator | 2026-02-20 05:25:36.216784 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:25:36.216796 | orchestrator | Friday 20 February 2026 05:25:33 +0000 (0:00:01.108) 0:29:41.205 ******* 2026-02-20 05:25:36.216809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.216824 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.216835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.216849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:25:36.216890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.216903 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.216914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.217051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:25:36.217074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.217097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:25:36.217111 | orchestrator | 
skipping: [testbed-node-2] 2026-02-20 05:25:36.217124 | orchestrator | 2026-02-20 05:25:36.217138 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:25:36.217151 | orchestrator | Friday 20 February 2026 05:25:34 +0000 (0:00:01.218) 0:29:42.424 ******* 2026-02-20 05:25:36.217165 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:36.217184 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:36.217206 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338415 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338527 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338574 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338621 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3bf70d99', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1', 'scsi-SQEMU_QEMU_HARDDISK_3bf70d99-e61c-4837-83b3-53782c1e170c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338635 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338653 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:25:47.338664 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:47.338676 | orchestrator | 2026-02-20 05:25:47.338687 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:25:47.338699 | orchestrator | Friday 20 February 2026 05:25:36 +0000 (0:00:01.268) 0:29:43.693 ******* 2026-02-20 05:25:47.338709 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:47.338719 | orchestrator | 2026-02-20 05:25:47.338729 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:25:47.338739 | orchestrator 
| Friday 20 February 2026 05:25:37 +0000 (0:00:01.651) 0:29:45.345 ******* 2026-02-20 05:25:47.338749 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:47.338758 | orchestrator | 2026-02-20 05:25:47.338768 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:25:47.338778 | orchestrator | Friday 20 February 2026 05:25:39 +0000 (0:00:01.164) 0:29:46.509 ******* 2026-02-20 05:25:47.338788 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:25:47.338797 | orchestrator | 2026-02-20 05:25:47.338807 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:25:47.338817 | orchestrator | Friday 20 February 2026 05:25:40 +0000 (0:00:01.639) 0:29:48.149 ******* 2026-02-20 05:25:47.338826 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:47.338836 | orchestrator | 2026-02-20 05:25:47.338846 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:25:47.338855 | orchestrator | Friday 20 February 2026 05:25:41 +0000 (0:00:01.160) 0:29:49.309 ******* 2026-02-20 05:25:47.338865 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:47.338875 | orchestrator | 2026-02-20 05:25:47.338884 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:25:47.338894 | orchestrator | Friday 20 February 2026 05:25:43 +0000 (0:00:01.226) 0:29:50.536 ******* 2026-02-20 05:25:47.338904 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:47.338914 | orchestrator | 2026-02-20 05:25:47.338929 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:25:47.338941 | orchestrator | Friday 20 February 2026 05:25:44 +0000 (0:00:01.135) 0:29:51.671 ******* 2026-02-20 05:25:47.338952 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-20 05:25:47.338964 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-02-20 05:25:47.339006 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:25:47.339017 | orchestrator | 2026-02-20 05:25:47.339028 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:25:47.339039 | orchestrator | Friday 20 February 2026 05:25:46 +0000 (0:00:01.957) 0:29:53.629 ******* 2026-02-20 05:25:47.339050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-20 05:25:47.339062 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-20 05:25:47.339080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-20 05:25:47.339091 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:25:47.339103 | orchestrator | 2026-02-20 05:25:47.339121 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:26:23.014665 | orchestrator | Friday 20 February 2026 05:25:47 +0000 (0:00:01.177) 0:29:54.806 ******* 2026-02-20 05:26:23.014807 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:26:23.014832 | orchestrator | 2026-02-20 05:26:23.014852 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:26:23.014872 | orchestrator | Friday 20 February 2026 05:25:48 +0000 (0:00:01.128) 0:29:55.935 ******* 2026-02-20 05:26:23.014891 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:26:23.014911 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:26:23.014930 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:26:23.014949 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:26:23.014967 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-20 05:26:23.015054 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:26:23.015075 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:26:23.015093 | orchestrator | 2026-02-20 05:26:23.015113 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:26:23.015132 | orchestrator | Friday 20 February 2026 05:25:50 +0000 (0:00:02.053) 0:29:57.988 ******* 2026-02-20 05:26:23.015149 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:26:23.015169 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:26:23.015190 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-20 05:26:23.015211 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:26:23.015230 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:26:23.015250 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:26:23.015269 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:26:23.015286 | orchestrator | 2026-02-20 05:26:23.015306 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:26:23.015327 | orchestrator | Friday 20 February 2026 05:25:52 +0000 (0:00:02.105) 0:30:00.094 ******* 2026-02-20 05:26:23.015348 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-20 05:26:23.015369 | orchestrator | 2026-02-20 05:26:23.015389 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:26:23.015409 
| orchestrator | Friday 20 February 2026 05:25:53 +0000 (0:00:01.098) 0:30:01.192 ******* 2026-02-20 05:26:23.015427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-20 05:26:23.015444 | orchestrator | 2026-02-20 05:26:23.015462 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:26:23.015479 | orchestrator | Friday 20 February 2026 05:25:54 +0000 (0:00:01.122) 0:30:02.315 ******* 2026-02-20 05:26:23.015498 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:26:23.015517 | orchestrator | 2026-02-20 05:26:23.015535 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:26:23.015554 | orchestrator | Friday 20 February 2026 05:25:56 +0000 (0:00:01.546) 0:30:03.862 ******* 2026-02-20 05:26:23.015573 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:26:23.015592 | orchestrator | 2026-02-20 05:26:23.015610 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:26:23.015669 | orchestrator | Friday 20 February 2026 05:25:57 +0000 (0:00:01.105) 0:30:04.967 ******* 2026-02-20 05:26:23.015682 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:26:23.015692 | orchestrator | 2026-02-20 05:26:23.015704 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:26:23.015715 | orchestrator | Friday 20 February 2026 05:25:58 +0000 (0:00:01.131) 0:30:06.099 ******* 2026-02-20 05:26:23.015726 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:26:23.015737 | orchestrator | 2026-02-20 05:26:23.015748 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:26:23.015759 | orchestrator | Friday 20 February 2026 05:25:59 +0000 (0:00:01.125) 0:30:07.225 ******* 2026-02-20 05:26:23.015770 | orchestrator | ok: [testbed-node-2] 
2026-02-20 05:26:23.015781 | orchestrator |
2026-02-20 05:26:23.015792 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:26:23.015819 | orchestrator | Friday 20 February 2026 05:26:01 +0000 (0:00:01.594) 0:30:08.819 *******
2026-02-20 05:26:23.015830 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.015841 | orchestrator |
2026-02-20 05:26:23.015852 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:26:23.015863 | orchestrator | Friday 20 February 2026 05:26:02 +0000 (0:00:01.100) 0:30:09.920 *******
2026-02-20 05:26:23.015874 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.015885 | orchestrator |
2026-02-20 05:26:23.015896 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:26:23.015906 | orchestrator | Friday 20 February 2026 05:26:03 +0000 (0:00:01.108) 0:30:11.029 *******
2026-02-20 05:26:23.015917 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.015928 | orchestrator |
2026-02-20 05:26:23.015939 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:26:23.015949 | orchestrator | Friday 20 February 2026 05:26:05 +0000 (0:00:01.593) 0:30:12.623 *******
2026-02-20 05:26:23.015960 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.015971 | orchestrator |
2026-02-20 05:26:23.016084 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 05:26:23.016134 | orchestrator | Friday 20 February 2026 05:26:06 +0000 (0:00:01.534) 0:30:14.158 *******
2026-02-20 05:26:23.016153 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016171 | orchestrator |
2026-02-20 05:26:23.016189 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:26:23.016206 | orchestrator | Friday 20 February 2026 05:26:07 +0000 (0:00:00.794) 0:30:14.952 *******
2026-02-20 05:26:23.016222 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.016240 | orchestrator |
2026-02-20 05:26:23.016257 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:26:23.016274 | orchestrator | Friday 20 February 2026 05:26:08 +0000 (0:00:00.787) 0:30:15.739 *******
2026-02-20 05:26:23.016293 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016312 | orchestrator |
2026-02-20 05:26:23.016331 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:26:23.016347 | orchestrator | Friday 20 February 2026 05:26:09 +0000 (0:00:00.749) 0:30:16.489 *******
2026-02-20 05:26:23.016364 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016381 | orchestrator |
2026-02-20 05:26:23.016399 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:26:23.016417 | orchestrator | Friday 20 February 2026 05:26:09 +0000 (0:00:00.761) 0:30:17.250 *******
2026-02-20 05:26:23.016436 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016454 | orchestrator |
2026-02-20 05:26:23.016474 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:26:23.016493 | orchestrator | Friday 20 February 2026 05:26:10 +0000 (0:00:00.809) 0:30:18.060 *******
2026-02-20 05:26:23.016511 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016530 | orchestrator |
2026-02-20 05:26:23.016548 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:26:23.016584 | orchestrator | Friday 20 February 2026 05:26:11 +0000 (0:00:00.777) 0:30:18.837 *******
2026-02-20 05:26:23.016601 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016617 | orchestrator |
2026-02-20 05:26:23.016632 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:26:23.016649 | orchestrator | Friday 20 February 2026 05:26:12 +0000 (0:00:00.775) 0:30:19.613 *******
2026-02-20 05:26:23.016666 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.016683 | orchestrator |
2026-02-20 05:26:23.016698 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:26:23.016715 | orchestrator | Friday 20 February 2026 05:26:12 +0000 (0:00:00.783) 0:30:20.396 *******
2026-02-20 05:26:23.016732 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.016748 | orchestrator |
2026-02-20 05:26:23.016764 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:26:23.016782 | orchestrator | Friday 20 February 2026 05:26:13 +0000 (0:00:00.817) 0:30:21.214 *******
2026-02-20 05:26:23.016799 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:26:23.016816 | orchestrator |
2026-02-20 05:26:23.016833 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:26:23.016850 | orchestrator | Friday 20 February 2026 05:26:14 +0000 (0:00:00.864) 0:30:22.079 *******
2026-02-20 05:26:23.016866 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016882 | orchestrator |
2026-02-20 05:26:23.016900 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:26:23.016916 | orchestrator | Friday 20 February 2026 05:26:15 +0000 (0:00:00.782) 0:30:22.861 *******
2026-02-20 05:26:23.016932 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.016950 | orchestrator |
2026-02-20 05:26:23.016967 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:26:23.017035 | orchestrator | Friday 20 February 2026 05:26:16 +0000 (0:00:00.741) 0:30:23.603 *******
2026-02-20 05:26:23.017046 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017056 | orchestrator |
2026-02-20 05:26:23.017066 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:26:23.017075 | orchestrator | Friday 20 February 2026 05:26:16 +0000 (0:00:00.762) 0:30:24.365 *******
2026-02-20 05:26:23.017085 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017095 | orchestrator |
2026-02-20 05:26:23.017104 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:26:23.017114 | orchestrator | Friday 20 February 2026 05:26:17 +0000 (0:00:00.741) 0:30:25.107 *******
2026-02-20 05:26:23.017124 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017134 | orchestrator |
2026-02-20 05:26:23.017144 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:26:23.017153 | orchestrator | Friday 20 February 2026 05:26:18 +0000 (0:00:00.783) 0:30:25.891 *******
2026-02-20 05:26:23.017163 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017173 | orchestrator |
2026-02-20 05:26:23.017182 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:26:23.017192 | orchestrator | Friday 20 February 2026 05:26:19 +0000 (0:00:00.780) 0:30:26.672 *******
2026-02-20 05:26:23.017202 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017212 | orchestrator |
2026-02-20 05:26:23.017233 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:26:23.017244 | orchestrator | Friday 20 February 2026 05:26:19 +0000 (0:00:00.758) 0:30:27.430 *******
2026-02-20 05:26:23.017253 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017263 | orchestrator |
2026-02-20 05:26:23.017272 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:26:23.017282 | orchestrator | Friday 20 February 2026 05:26:20 +0000 (0:00:00.784) 0:30:28.215 *******
2026-02-20 05:26:23.017292 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017302 | orchestrator |
2026-02-20 05:26:23.017311 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:26:23.017330 | orchestrator | Friday 20 February 2026 05:26:21 +0000 (0:00:00.764) 0:30:28.979 *******
2026-02-20 05:26:23.017340 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017349 | orchestrator |
2026-02-20 05:26:23.017359 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:26:23.017369 | orchestrator | Friday 20 February 2026 05:26:22 +0000 (0:00:00.755) 0:30:29.734 *******
2026-02-20 05:26:23.017378 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:26:23.017388 | orchestrator |
2026-02-20 05:26:23.017411 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 05:27:08.623129 | orchestrator | Friday 20 February 2026 05:26:23 +0000 (0:00:00.751) 0:30:30.486 *******
2026-02-20 05:27:08.623212 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623221 | orchestrator |
2026-02-20 05:27:08.623227 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 05:27:08.623232 | orchestrator | Friday 20 February 2026 05:26:23 +0000 (0:00:00.752) 0:30:31.239 *******
2026-02-20 05:27:08.623237 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623243 | orchestrator |
2026-02-20 05:27:08.623249 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 05:27:08.623254 | orchestrator | Friday 20 February 2026 05:26:25 +0000 (0:00:01.733) 0:30:32.972 *******
2026-02-20 05:27:08.623259 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623264 | orchestrator |
2026-02-20 05:27:08.623269 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 05:27:08.623274 | orchestrator | Friday 20 February 2026 05:26:27 +0000 (0:00:02.174) 0:30:35.147 *******
2026-02-20 05:27:08.623279 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-20 05:27:08.623285 | orchestrator |
2026-02-20 05:27:08.623290 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 05:27:08.623295 | orchestrator | Friday 20 February 2026 05:26:28 +0000 (0:00:01.105) 0:30:36.252 *******
2026-02-20 05:27:08.623300 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623305 | orchestrator |
2026-02-20 05:27:08.623309 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 05:27:08.623314 | orchestrator | Friday 20 February 2026 05:26:29 +0000 (0:00:01.090) 0:30:37.342 *******
2026-02-20 05:27:08.623319 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623324 | orchestrator |
2026-02-20 05:27:08.623329 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 05:27:08.623334 | orchestrator | Friday 20 February 2026 05:26:30 +0000 (0:00:01.122) 0:30:38.465 *******
2026-02-20 05:27:08.623339 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 05:27:08.623344 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 05:27:08.623349 | orchestrator |
2026-02-20 05:27:08.623354 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 05:27:08.623359 | orchestrator | Friday 20 February 2026 05:26:32 +0000 (0:00:01.880) 0:30:40.346 *******
2026-02-20 05:27:08.623364 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623369 | orchestrator |
2026-02-20 05:27:08.623373 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 05:27:08.623379 | orchestrator | Friday 20 February 2026 05:26:34 +0000 (0:00:01.511) 0:30:41.858 *******
2026-02-20 05:27:08.623384 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623389 | orchestrator |
2026-02-20 05:27:08.623394 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-20 05:27:08.623399 | orchestrator | Friday 20 February 2026 05:26:35 +0000 (0:00:01.121) 0:30:42.979 *******
2026-02-20 05:27:08.623404 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623408 | orchestrator |
2026-02-20 05:27:08.623413 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 05:27:08.623418 | orchestrator | Friday 20 February 2026 05:26:36 +0000 (0:00:00.767) 0:30:43.747 *******
2026-02-20 05:27:08.623441 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623447 | orchestrator |
2026-02-20 05:27:08.623452 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 05:27:08.623456 | orchestrator | Friday 20 February 2026 05:26:37 +0000 (0:00:00.782) 0:30:44.530 *******
2026-02-20 05:27:08.623461 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-20 05:27:08.623466 | orchestrator |
2026-02-20 05:27:08.623471 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-20 05:27:08.623476 | orchestrator | Friday 20 February 2026 05:26:38 +0000 (0:00:01.106) 0:30:45.637 *******
2026-02-20 05:27:08.623481 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623486 | orchestrator |
2026-02-20 05:27:08.623490 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-20 05:27:08.623495 | orchestrator | Friday 20 February 2026 05:26:40 +0000 (0:00:01.954) 0:30:47.591 *******
2026-02-20 05:27:08.623500 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 05:27:08.623505 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 05:27:08.623510 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 05:27:08.623515 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623519 | orchestrator |
2026-02-20 05:27:08.623535 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 05:27:08.623540 | orchestrator | Friday 20 February 2026 05:26:41 +0000 (0:00:01.172) 0:30:48.764 *******
2026-02-20 05:27:08.623545 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623550 | orchestrator |
2026-02-20 05:27:08.623555 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 05:27:08.623560 | orchestrator | Friday 20 February 2026 05:26:42 +0000 (0:00:01.111) 0:30:49.875 *******
2026-02-20 05:27:08.623565 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623570 | orchestrator |
2026-02-20 05:27:08.623574 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 05:27:08.623579 | orchestrator | Friday 20 February 2026 05:26:43 +0000 (0:00:01.156) 0:30:51.032 *******
2026-02-20 05:27:08.623584 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623589 | orchestrator |
2026-02-20 05:27:08.623594 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 05:27:08.623598 | orchestrator | Friday 20 February 2026 05:26:44 +0000 (0:00:01.113) 0:30:52.146 *******
2026-02-20 05:27:08.623603 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623608 | orchestrator |
2026-02-20 05:27:08.623623 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 05:27:08.623628 | orchestrator | Friday 20 February 2026 05:26:45 +0000 (0:00:01.112) 0:30:53.258 *******
2026-02-20 05:27:08.623633 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623638 | orchestrator |
2026-02-20 05:27:08.623643 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 05:27:08.623647 | orchestrator | Friday 20 February 2026 05:26:46 +0000 (0:00:00.775) 0:30:54.033 *******
2026-02-20 05:27:08.623652 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623657 | orchestrator |
2026-02-20 05:27:08.623662 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 05:27:08.623667 | orchestrator | Friday 20 February 2026 05:26:48 +0000 (0:00:02.324) 0:30:56.358 *******
2026-02-20 05:27:08.623671 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623678 | orchestrator |
2026-02-20 05:27:08.623683 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 05:27:08.623689 | orchestrator | Friday 20 February 2026 05:26:49 +0000 (0:00:00.775) 0:30:57.134 *******
2026-02-20 05:27:08.623694 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-20 05:27:08.623700 | orchestrator |
2026-02-20 05:27:08.623710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 05:27:08.623716 | orchestrator | Friday 20 February 2026 05:26:50 +0000 (0:00:01.132) 0:30:58.266 *******
2026-02-20 05:27:08.623721 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623727 | orchestrator |
2026-02-20 05:27:08.623733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 05:27:08.623738 | orchestrator | Friday 20 February 2026 05:26:51 +0000 (0:00:01.141) 0:30:59.407 *******
2026-02-20 05:27:08.623744 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623750 | orchestrator |
2026-02-20 05:27:08.623755 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 05:27:08.623761 | orchestrator | Friday 20 February 2026 05:26:53 +0000 (0:00:01.117) 0:31:00.525 *******
2026-02-20 05:27:08.623767 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623772 | orchestrator |
2026-02-20 05:27:08.623778 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 05:27:08.623784 | orchestrator | Friday 20 February 2026 05:26:54 +0000 (0:00:01.143) 0:31:01.668 *******
2026-02-20 05:27:08.623789 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623795 | orchestrator |
2026-02-20 05:27:08.623801 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 05:27:08.623806 | orchestrator | Friday 20 February 2026 05:26:55 +0000 (0:00:01.166) 0:31:02.835 *******
2026-02-20 05:27:08.623812 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623818 | orchestrator |
2026-02-20 05:27:08.623823 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 05:27:08.623829 | orchestrator | Friday 20 February 2026 05:26:56 +0000 (0:00:01.111) 0:31:03.947 *******
2026-02-20 05:27:08.623834 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623840 | orchestrator |
2026-02-20 05:27:08.623845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 05:27:08.623851 | orchestrator | Friday 20 February 2026 05:26:57 +0000 (0:00:01.099) 0:31:05.046 *******
2026-02-20 05:27:08.623857 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623862 | orchestrator |
2026-02-20 05:27:08.623868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 05:27:08.623874 | orchestrator | Friday 20 February 2026 05:26:58 +0000 (0:00:01.129) 0:31:06.176 *******
2026-02-20 05:27:08.623879 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:08.623885 | orchestrator |
2026-02-20 05:27:08.623891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 05:27:08.623896 | orchestrator | Friday 20 February 2026 05:26:59 +0000 (0:00:01.114) 0:31:07.290 *******
2026-02-20 05:27:08.623902 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:08.623907 | orchestrator |
2026-02-20 05:27:08.623913 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 05:27:08.623919 | orchestrator | Friday 20 February 2026 05:27:00 +0000 (0:00:00.844) 0:31:08.135 *******
2026-02-20 05:27:08.623924 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-20 05:27:08.623930 | orchestrator |
2026-02-20 05:27:08.623936 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 05:27:08.623941 | orchestrator | Friday 20 February 2026 05:27:01 +0000 (0:00:01.096) 0:31:09.232 *******
2026-02-20 05:27:08.623947 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-20 05:27:08.623953 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-20 05:27:08.623975 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-20 05:27:08.623981 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-20 05:27:08.624018 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-20 05:27:08.624024 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-20 05:27:08.624029 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-20 05:27:08.624035 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-20 05:27:08.624045 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 05:27:08.624050 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 05:27:08.624055 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 05:27:08.624060 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 05:27:08.624064 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 05:27:08.624069 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 05:27:08.624074 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-20 05:27:08.624079 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-20 05:27:08.624084 | orchestrator |
2026-02-20 05:27:08.624091 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 05:27:48.226573 | orchestrator | Friday 20 February 2026 05:27:08 +0000 (0:00:06.854) 0:31:16.086 *******
2026-02-20 05:27:48.226678 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226692 | orchestrator |
2026-02-20 05:27:48.226703 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 05:27:48.226712 | orchestrator | Friday 20 February 2026 05:27:09 +0000 (0:00:00.755) 0:31:16.842 *******
2026-02-20 05:27:48.226721 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226748 | orchestrator |
2026-02-20 05:27:48.226758 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 05:27:48.226767 | orchestrator | Friday 20 February 2026 05:27:10 +0000 (0:00:00.758) 0:31:17.600 *******
2026-02-20 05:27:48.226786 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226795 | orchestrator |
2026-02-20 05:27:48.226804 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 05:27:48.226813 | orchestrator | Friday 20 February 2026 05:27:10 +0000 (0:00:00.783) 0:31:18.384 *******
2026-02-20 05:27:48.226822 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226831 | orchestrator |
2026-02-20 05:27:48.226840 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 05:27:48.226849 | orchestrator | Friday 20 February 2026 05:27:11 +0000 (0:00:00.770) 0:31:19.155 *******
2026-02-20 05:27:48.226858 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226867 | orchestrator |
2026-02-20 05:27:48.226876 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 05:27:48.226885 | orchestrator | Friday 20 February 2026 05:27:12 +0000 (0:00:00.767) 0:31:19.923 *******
2026-02-20 05:27:48.226894 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226903 | orchestrator |
2026-02-20 05:27:48.226912 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 05:27:48.226922 | orchestrator | Friday 20 February 2026 05:27:13 +0000 (0:00:00.755) 0:31:20.678 *******
2026-02-20 05:27:48.226931 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226940 | orchestrator |
2026-02-20 05:27:48.226949 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 05:27:48.226958 | orchestrator | Friday 20 February 2026 05:27:13 +0000 (0:00:00.755) 0:31:21.433 *******
2026-02-20 05:27:48.226967 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.226976 | orchestrator |
2026-02-20 05:27:48.226984 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 05:27:48.226993 | orchestrator | Friday 20 February 2026 05:27:14 +0000 (0:00:00.762) 0:31:22.196 *******
2026-02-20 05:27:48.227024 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227040 | orchestrator |
2026-02-20 05:27:48.227055 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 05:27:48.227071 | orchestrator | Friday 20 February 2026 05:27:15 +0000 (0:00:00.747) 0:31:22.943 *******
2026-02-20 05:27:48.227086 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227126 | orchestrator |
2026-02-20 05:27:48.227140 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 05:27:48.227156 | orchestrator | Friday 20 February 2026 05:27:16 +0000 (0:00:00.764) 0:31:23.707 *******
2026-02-20 05:27:48.227171 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227186 | orchestrator |
2026-02-20 05:27:48.227200 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 05:27:48.227215 | orchestrator | Friday 20 February 2026 05:27:16 +0000 (0:00:00.769) 0:31:24.477 *******
2026-02-20 05:27:48.227228 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227243 | orchestrator |
2026-02-20 05:27:48.227257 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:27:48.227272 | orchestrator | Friday 20 February 2026 05:27:17 +0000 (0:00:00.752) 0:31:25.230 *******
2026-02-20 05:27:48.227288 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227304 | orchestrator |
2026-02-20 05:27:48.227320 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:27:48.227333 | orchestrator | Friday 20 February 2026 05:27:18 +0000 (0:00:00.906) 0:31:26.137 *******
2026-02-20 05:27:48.227347 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227362 | orchestrator |
2026-02-20 05:27:48.227376 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-20 05:27:48.227391 | orchestrator | Friday 20 February 2026 05:27:19 +0000 (0:00:00.755) 0:31:26.893 *******
2026-02-20 05:27:48.227407 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227421 | orchestrator |
2026-02-20 05:27:48.227433 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-20 05:27:48.227442 | orchestrator | Friday 20 February 2026 05:27:20 +0000 (0:00:00.864) 0:31:27.758 *******
2026-02-20 05:27:48.227451 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227459 | orchestrator |
2026-02-20 05:27:48.227483 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-20 05:27:48.227492 | orchestrator | Friday 20 February 2026 05:27:21 +0000 (0:00:00.774) 0:31:28.532 *******
2026-02-20 05:27:48.227501 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227509 | orchestrator |
2026-02-20 05:27:48.227519 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:27:48.227529 | orchestrator | Friday 20 February 2026 05:27:21 +0000 (0:00:00.758) 0:31:29.290 *******
2026-02-20 05:27:48.227537 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227546 | orchestrator |
2026-02-20 05:27:48.227555 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:27:48.227563 | orchestrator | Friday 20 February 2026 05:27:22 +0000 (0:00:00.784) 0:31:30.075 *******
2026-02-20 05:27:48.227572 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227580 | orchestrator |
2026-02-20 05:27:48.227589 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:27:48.227598 | orchestrator | Friday 20 February 2026 05:27:23 +0000 (0:00:00.784) 0:31:30.860 *******
2026-02-20 05:27:48.227606 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227615 | orchestrator |
2026-02-20 05:27:48.227642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:27:48.227657 | orchestrator | Friday 20 February 2026 05:27:24 +0000 (0:00:00.797) 0:31:31.657 *******
2026-02-20 05:27:48.227670 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227688 | orchestrator |
2026-02-20 05:27:48.227709 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:27:48.227723 | orchestrator | Friday 20 February 2026 05:27:24 +0000 (0:00:00.780) 0:31:32.438 *******
2026-02-20 05:27:48.227737 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:27:48.227751 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:27:48.227764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:27:48.227778 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227808 | orchestrator |
2026-02-20 05:27:48.227823 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:27:48.227836 | orchestrator | Friday 20 February 2026 05:27:25 +0000 (0:00:01.006) 0:31:33.445 *******
2026-02-20 05:27:48.227850 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:27:48.227864 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:27:48.227879 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:27:48.227892 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.227906 | orchestrator |
2026-02-20 05:27:48.227919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:27:48.227934 | orchestrator | Friday 20 February 2026 05:27:26 +0000 (0:00:01.035) 0:31:34.480 *******
2026-02-20 05:27:48.227947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-20 05:27:48.227961 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-20 05:27:48.227974 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-20 05:27:48.227988 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.228029 | orchestrator |
2026-02-20 05:27:48.228044 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:27:48.228059 | orchestrator | Friday 20 February 2026 05:27:28 +0000 (0:00:01.068) 0:31:35.548 *******
2026-02-20 05:27:48.228073 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.228088 | orchestrator |
2026-02-20 05:27:48.228102 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:27:48.228117 | orchestrator | Friday 20 February 2026 05:27:28 +0000 (0:00:00.753) 0:31:36.301 *******
2026-02-20 05:27:48.228131 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-20 05:27:48.228146 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.228159 | orchestrator |
2026-02-20 05:27:48.228174 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-20 05:27:48.228188 | orchestrator | Friday 20 February 2026 05:27:29 +0000 (0:00:00.859) 0:31:37.161 *******
2026-02-20 05:27:48.228204 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:48.228218 | orchestrator |
2026-02-20 05:27:48.228234 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-20 05:27:48.228248 | orchestrator | Friday 20 February 2026 05:27:31 +0000 (0:00:01.405) 0:31:38.567 *******
2026-02-20 05:27:48.228262 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:27:48.228277 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:27:48.228290 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-20 05:27:48.228354 | orchestrator |
2026-02-20 05:27:48.228371 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-20 05:27:48.228385 | orchestrator | Friday 20 February 2026 05:27:32 +0000 (0:00:01.584) 0:31:40.151 *******
2026-02-20 05:27:48.228399 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-20 05:27:48.228414 | orchestrator |
2026-02-20 05:27:48.228428 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-20 05:27:48.228443 | orchestrator | Friday 20 February 2026 05:27:33 +0000 (0:00:01.116) 0:31:41.267 *******
2026-02-20 05:27:48.228457 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:48.228472 | orchestrator |
2026-02-20 05:27:48.228482 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-20 05:27:48.228491 | orchestrator | Friday 20 February 2026 05:27:35 +0000 (0:00:01.533) 0:31:42.801 *******
2026-02-20 05:27:48.228500 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:27:48.228509 | orchestrator |
2026-02-20 05:27:48.228518 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-20 05:27:48.228527 | orchestrator | Friday 20 February 2026 05:27:36 +0000 (0:00:01.104) 0:31:43.905 *******
2026-02-20 05:27:48.228546 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 05:27:48.228566 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 05:27:48.228575 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 05:27:48.228583 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-20 05:27:48.228592 | orchestrator |
2026-02-20 05:27:48.228601 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-20 05:27:48.228609 | orchestrator | Friday 20 February 2026 05:27:43 +0000 (0:00:07.243) 0:31:51.148 *******
2026-02-20 05:27:48.228618 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:27:48.228627 | orchestrator |
2026-02-20 05:27:48.228635 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-20 05:27:48.228644 | orchestrator | Friday 20 February 2026 05:27:44 +0000 (0:00:01.203) 0:31:52.352 *******
2026-02-20 05:27:48.228653 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-20 05:27:48.228661 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-20 05:27:48.228670 | orchestrator |
2026-02-20 05:27:48.228679 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-20 05:27:48.228700 | orchestrator | Friday 20 February 2026 05:27:48 +0000 (0:00:03.341) 0:31:55.694 *******
2026-02-20 05:28:30.774362 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-20 05:28:30.774492 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-20 05:28:30.774511 | orchestrator |
2026-02-20 05:28:30.774523 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-20 05:28:30.774535 | orchestrator | Friday 20 February 2026 05:27:50 +0000 (0:00:02.023) 0:31:57.717 *******
2026-02-20 05:28:30.774545 | orchestrator | ok: [testbed-node-2]
2026-02-20 05:28:30.774555 | orchestrator |
2026-02-20 05:28:30.774565 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-20 05:28:30.774576 | orchestrator | Friday 20 February 2026 05:27:51 +0000 (0:00:01.537) 0:31:59.254 *******
2026-02-20 05:28:30.774585 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:28:30.774596 | orchestrator |
2026-02-20 05:28:30.774606 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-20 05:28:30.774616 | orchestrator | Friday 20 February 2026 05:27:52 +0000 (0:00:00.782) 0:32:00.037 *******
2026-02-20 05:28:30.774626 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:28:30.774634 | orchestrator |
2026-02-20 05:28:30.774642 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-20 05:28:30.774650 | orchestrator | Friday 20 February 2026 05:27:53 +0000 (0:00:00.757) 0:32:00.794 *******
2026-02-20 05:28:30.774658 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-20 05:28:30.774667 | orchestrator |
2026-02-20 05:28:30.774675 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-20 05:28:30.774683 | orchestrator | Friday 20 February 2026 05:27:54 +0000 (0:00:01.148) 0:32:01.943 *******
2026-02-20 05:28:30.774691 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:28:30.774699 | orchestrator |
2026-02-20 05:28:30.774707 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-20 05:28:30.774715 | orchestrator | Friday 20 February 2026 05:27:55 +0000 (0:00:01.156) 0:32:03.100 *******
2026-02-20 05:28:30.774723 | orchestrator | skipping: [testbed-node-2]
2026-02-20 05:28:30.774731 | orchestrator |
2026-02-20 05:28:30.774739 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-20 05:28:30.774747 | orchestrator | Friday 20 February 2026 05:27:56 +0000 (0:00:01.135) 0:32:04.236 *******
2026-02-20 05:28:30.774755 | orchestrator | included:
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-02-20 05:28:30.774764 | orchestrator | 2026-02-20 05:28:30.774772 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-20 05:28:30.774780 | orchestrator | Friday 20 February 2026 05:27:57 +0000 (0:00:01.113) 0:32:05.350 ******* 2026-02-20 05:28:30.774788 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:28:30.774818 | orchestrator | 2026-02-20 05:28:30.774827 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-20 05:28:30.774835 | orchestrator | Friday 20 February 2026 05:28:00 +0000 (0:00:02.150) 0:32:07.503 ******* 2026-02-20 05:28:30.774843 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:28:30.774853 | orchestrator | 2026-02-20 05:28:30.774862 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-20 05:28:30.774872 | orchestrator | Friday 20 February 2026 05:28:02 +0000 (0:00:01.989) 0:32:09.493 ******* 2026-02-20 05:28:30.774881 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:28:30.774890 | orchestrator | 2026-02-20 05:28:30.774899 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-20 05:28:30.774909 | orchestrator | Friday 20 February 2026 05:28:04 +0000 (0:00:02.519) 0:32:12.013 ******* 2026-02-20 05:28:30.774918 | orchestrator | changed: [testbed-node-2] 2026-02-20 05:28:30.774930 | orchestrator | 2026-02-20 05:28:30.774944 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-20 05:28:30.774958 | orchestrator | Friday 20 February 2026 05:28:08 +0000 (0:00:03.917) 0:32:15.931 ******* 2026-02-20 05:28:30.774971 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-20 05:28:30.774985 | orchestrator | 2026-02-20 05:28:30.774999 | orchestrator | TASK [ceph-mgr : Wait for all mgr to 
be up] ************************************ 2026-02-20 05:28:30.775013 | orchestrator | Friday 20 February 2026 05:28:09 +0000 (0:00:01.473) 0:32:17.404 ******* 2026-02-20 05:28:30.775056 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:28:30.775070 | orchestrator | 2026-02-20 05:28:30.775080 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-20 05:28:30.775089 | orchestrator | Friday 20 February 2026 05:28:12 +0000 (0:00:02.485) 0:32:19.890 ******* 2026-02-20 05:28:30.775099 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:28:30.775108 | orchestrator | 2026-02-20 05:28:30.775117 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-20 05:28:30.775141 | orchestrator | Friday 20 February 2026 05:28:15 +0000 (0:00:02.699) 0:32:22.590 ******* 2026-02-20 05:28:30.775151 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:28:30.775172 | orchestrator | 2026-02-20 05:28:30.775181 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-20 05:28:30.775191 | orchestrator | Friday 20 February 2026 05:28:16 +0000 (0:00:01.311) 0:32:23.902 ******* 2026-02-20 05:28:30.775201 | orchestrator | ok: [testbed-node-2] 2026-02-20 05:28:30.775210 | orchestrator | 2026-02-20 05:28:30.775219 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-20 05:28:30.775229 | orchestrator | Friday 20 February 2026 05:28:17 +0000 (0:00:01.163) 0:32:25.065 ******* 2026-02-20 05:28:30.775238 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-20 05:28:30.775247 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-20 05:28:30.775255 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:28:30.775263 | orchestrator | 2026-02-20 05:28:30.775271 | orchestrator | TASK 
[ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-20 05:28:30.775279 | orchestrator | Friday 20 February 2026 05:28:18 +0000 (0:00:01.333) 0:32:26.399 ******* 2026-02-20 05:28:30.775287 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-20 05:28:30.775296 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-20 05:28:30.775327 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-20 05:28:30.775342 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-20 05:28:30.775355 | orchestrator | skipping: [testbed-node-2] 2026-02-20 05:28:30.775369 | orchestrator | 2026-02-20 05:28:30.775382 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-20 05:28:30.775396 | orchestrator | 2026-02-20 05:28:30.775411 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:28:30.775439 | orchestrator | Friday 20 February 2026 05:28:20 +0000 (0:00:01.879) 0:32:28.278 ******* 2026-02-20 05:28:30.775454 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:28:30.775468 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:28:30.775483 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:28:30.775497 | orchestrator | 2026-02-20 05:28:30.775512 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:28:30.775524 | orchestrator | Friday 20 February 2026 05:28:22 +0000 (0:00:01.644) 0:32:29.922 ******* 2026-02-20 05:28:30.775532 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:28:30.775540 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:28:30.775548 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:28:30.775555 | orchestrator | 2026-02-20 05:28:30.775564 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-20 05:28:30.775571 | orchestrator | Friday 20 February 2026 
05:28:23 +0000 (0:00:01.541) 0:32:31.464 ******* 2026-02-20 05:28:30.775579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:28:30.775587 | orchestrator | 2026-02-20 05:28:30.775595 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-20 05:28:30.775603 | orchestrator | Friday 20 February 2026 05:28:27 +0000 (0:00:03.101) 0:32:34.566 ******* 2026-02-20 05:28:30.775611 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:28:30.775619 | orchestrator | 2026-02-20 05:28:30.775627 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-20 05:28:30.775635 | orchestrator | Friday 20 February 2026 05:28:30 +0000 (0:00:03.153) 0:32:37.720 ******* 2026-02-20 05:28:30.775653 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-20T02:53:40.710711+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:30.775677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-20T02:54:47.977060+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '35', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:31.471577 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-20T02:54:51.659556+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '84', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 
'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:31.471674 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-20T02:55:49.064373+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:31.471707 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-20T02:55:55.224317+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:31.471717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-20T02:56:01.403126+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:31.471742 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-20T02:56:07.611958+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': 
'0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '185', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:32.923514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-20T02:56:13.838317+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 
'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:32.923657 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-20T02:56:25.713674+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 
32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:32.923701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-20T02:57:07.775263+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '90', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 90, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:32.923739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-20T02:57:17.025110+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '98', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 98, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:28:32.923763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-20T02:57:25.785424+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '195', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 195, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:30:10.390814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-20T02:57:34.688512+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 
'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '111', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 111, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:30:10.390926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-20T02:57:42.773148+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '117', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 117, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-20 05:30:10.390953 | orchestrator | 2026-02-20 05:30:10.390970 | orchestrator | TASK [Disable balancer] 
******************************************************** 2026-02-20 05:30:10.390976 | orchestrator | Friday 20 February 2026 05:28:32 +0000 (0:00:02.681) 0:32:40.401 ******* 2026-02-20 05:30:10.390981 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:30:10.390986 | orchestrator | 2026-02-20 05:30:10.390989 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-20 05:30:10.390993 | orchestrator | Friday 20 February 2026 05:28:35 +0000 (0:00:03.066) 0:32:43.468 ******* 2026-02-20 05:30:10.390997 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-20 05:30:10.391003 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-20 05:30:10.391007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-20 05:30:10.391011 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-20 05:30:10.391017 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-20 05:30:10.391021 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-20 05:30:10.391025 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-20 05:30:10.391029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-20 05:30:10.391033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-20 05:30:10.391037 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-20 05:30:10.391040 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-20 05:30:10.391044 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-20 05:30:10.391054 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-20 05:30:10.391058 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-20 05:30:10.391100 | orchestrator | 2026-02-20 05:30:10.391104 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-20 05:30:10.391111 | orchestrator | Friday 20 February 2026 05:29:53 +0000 (0:01:17.033) 0:34:00.502 ******* 2026-02-20 05:30:10.391115 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-20 05:30:10.391120 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-20 05:30:10.391124 | orchestrator | 2026-02-20 05:30:10.391127 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-20 05:30:10.391131 | orchestrator | 2026-02-20 05:30:10.391135 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:30:10.391139 | orchestrator | Friday 20 February 2026 05:29:59 +0000 (0:00:06.262) 0:34:06.765 ******* 2026-02-20 05:30:10.391143 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-20 05:30:10.391147 | orchestrator | 2026-02-20 05:30:10.391151 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:30:10.391154 | orchestrator | Friday 20 February 2026 05:30:00 +0000 (0:00:01.299) 0:34:08.064 ******* 2026-02-20 05:30:10.391158 | orchestrator | ok: [testbed-node-3] 2026-02-20 
05:30:10.391162 | orchestrator | 2026-02-20 05:30:10.391178 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:30:10.391182 | orchestrator | Friday 20 February 2026 05:30:02 +0000 (0:00:01.451) 0:34:09.515 ******* 2026-02-20 05:30:10.391186 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391190 | orchestrator | 2026-02-20 05:30:10.391194 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:30:10.391198 | orchestrator | Friday 20 February 2026 05:30:03 +0000 (0:00:01.118) 0:34:10.634 ******* 2026-02-20 05:30:10.391201 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391205 | orchestrator | 2026-02-20 05:30:10.391209 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:30:10.391213 | orchestrator | Friday 20 February 2026 05:30:04 +0000 (0:00:01.404) 0:34:12.039 ******* 2026-02-20 05:30:10.391217 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391221 | orchestrator | 2026-02-20 05:30:10.391224 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:30:10.391228 | orchestrator | Friday 20 February 2026 05:30:05 +0000 (0:00:01.150) 0:34:13.189 ******* 2026-02-20 05:30:10.391232 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391236 | orchestrator | 2026-02-20 05:30:10.391240 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:30:10.391244 | orchestrator | Friday 20 February 2026 05:30:06 +0000 (0:00:01.137) 0:34:14.327 ******* 2026-02-20 05:30:10.391247 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391251 | orchestrator | 2026-02-20 05:30:10.391255 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:30:10.391259 | orchestrator | Friday 20 February 2026 05:30:08 +0000 
(0:00:01.246) 0:34:15.574 ******* 2026-02-20 05:30:10.391263 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:10.391267 | orchestrator | 2026-02-20 05:30:10.391270 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:30:10.391274 | orchestrator | Friday 20 February 2026 05:30:09 +0000 (0:00:01.127) 0:34:16.702 ******* 2026-02-20 05:30:10.391278 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:10.391282 | orchestrator | 2026-02-20 05:30:10.391290 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:30:36.008420 | orchestrator | Friday 20 February 2026 05:30:10 +0000 (0:00:01.158) 0:34:17.861 ******* 2026-02-20 05:30:36.008539 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:30:36.008591 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:30:36.008609 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:30:36.008626 | orchestrator | 2026-02-20 05:30:36.008643 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:30:36.008655 | orchestrator | Friday 20 February 2026 05:30:12 +0000 (0:00:01.956) 0:34:19.818 ******* 2026-02-20 05:30:36.008665 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:36.008675 | orchestrator | 2026-02-20 05:30:36.008684 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:30:36.008693 | orchestrator | Friday 20 February 2026 05:30:13 +0000 (0:00:01.261) 0:34:21.079 ******* 2026-02-20 05:30:36.008702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:30:36.008711 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-02-20 05:30:36.008719 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:30:36.008728 | orchestrator | 2026-02-20 05:30:36.008737 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:30:36.008746 | orchestrator | Friday 20 February 2026 05:30:16 +0000 (0:00:03.192) 0:34:24.271 ******* 2026-02-20 05:30:36.008756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 05:30:36.008765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 05:30:36.008774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 05:30:36.008783 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.008792 | orchestrator | 2026-02-20 05:30:36.008801 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:30:36.008810 | orchestrator | Friday 20 February 2026 05:30:18 +0000 (0:00:01.733) 0:34:26.004 ******* 2026-02-20 05:30:36.008820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008866 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.008874 | 
orchestrator | 2026-02-20 05:30:36.008883 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:30:36.008892 | orchestrator | Friday 20 February 2026 05:30:20 +0000 (0:00:01.961) 0:34:27.966 ******* 2026-02-20 05:30:36.008903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008914 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008924 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:36.008943 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.008952 | orchestrator | 2026-02-20 05:30:36.008960 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:30:36.008969 | orchestrator | Friday 20 February 2026 05:30:21 +0000 (0:00:01.126) 0:34:29.092 ******* 2026-02-20 
05:30:36.008997 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:30:14.130617', 'end': '2026-02-20 05:30:14.189565', 'delta': '0:00:00.058948', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:30:36.009009 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:30:15.029611', 'end': '2026-02-20 05:30:15.074799', 'delta': '0:00:00.045188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:30:36.009023 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:30:15.598757', 'end': '2026-02-20 05:30:15.651724', 'delta': '0:00:00.052967', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:30:36.009033 | orchestrator | 2026-02-20 05:30:36.009042 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:30:36.009056 | orchestrator | Friday 20 February 2026 05:30:22 +0000 (0:00:01.164) 0:34:30.257 ******* 2026-02-20 05:30:36.009071 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:36.009113 | orchestrator | 2026-02-20 05:30:36.009129 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:30:36.009145 | orchestrator | Friday 20 February 2026 05:30:23 +0000 (0:00:01.204) 0:34:31.461 ******* 2026-02-20 05:30:36.009160 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.009175 | orchestrator | 2026-02-20 05:30:36.009184 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:30:36.009193 | orchestrator | Friday 20 February 2026 05:30:25 +0000 (0:00:01.167) 0:34:32.628 ******* 2026-02-20 05:30:36.009202 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:36.009211 | orchestrator | 2026-02-20 05:30:36.009219 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:30:36.009238 | orchestrator | Friday 20 February 2026 05:30:26 +0000 (0:00:01.103) 0:34:33.732 ******* 2026-02-20 05:30:36.009247 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:30:36.009256 | orchestrator | 2026-02-20 05:30:36.009265 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-02-20 05:30:36.009273 | orchestrator | Friday 20 February 2026 05:30:29 +0000 (0:00:02.920) 0:34:36.652 ******* 2026-02-20 05:30:36.009282 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:36.009291 | orchestrator | 2026-02-20 05:30:36.009300 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:30:36.009309 | orchestrator | Friday 20 February 2026 05:30:30 +0000 (0:00:01.108) 0:34:37.761 ******* 2026-02-20 05:30:36.009317 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.009326 | orchestrator | 2026-02-20 05:30:36.009335 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:30:36.009344 | orchestrator | Friday 20 February 2026 05:30:31 +0000 (0:00:01.124) 0:34:38.886 ******* 2026-02-20 05:30:36.009353 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.009362 | orchestrator | 2026-02-20 05:30:36.009370 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:30:36.009379 | orchestrator | Friday 20 February 2026 05:30:32 +0000 (0:00:01.216) 0:34:40.103 ******* 2026-02-20 05:30:36.009398 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.009407 | orchestrator | 2026-02-20 05:30:36.009416 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:30:36.009425 | orchestrator | Friday 20 February 2026 05:30:33 +0000 (0:00:01.119) 0:34:41.223 ******* 2026-02-20 05:30:36.009433 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:36.009442 | orchestrator | 2026-02-20 05:30:36.009451 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:30:36.009460 | orchestrator | Friday 20 February 2026 05:30:34 +0000 (0:00:01.102) 0:34:42.325 ******* 2026-02-20 05:30:36.009476 | orchestrator | ok: 
[testbed-node-3] 2026-02-20 05:30:40.649769 | orchestrator | 2026-02-20 05:30:40.649861 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:30:40.649874 | orchestrator | Friday 20 February 2026 05:30:35 +0000 (0:00:01.155) 0:34:43.481 ******* 2026-02-20 05:30:40.649882 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:40.649893 | orchestrator | 2026-02-20 05:30:40.649901 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:30:40.649909 | orchestrator | Friday 20 February 2026 05:30:37 +0000 (0:00:01.096) 0:34:44.578 ******* 2026-02-20 05:30:40.649917 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:40.649926 | orchestrator | 2026-02-20 05:30:40.649934 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:30:40.649943 | orchestrator | Friday 20 February 2026 05:30:38 +0000 (0:00:01.156) 0:34:45.734 ******* 2026-02-20 05:30:40.649951 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:40.649959 | orchestrator | 2026-02-20 05:30:40.649967 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:30:40.649977 | orchestrator | Friday 20 February 2026 05:30:39 +0000 (0:00:01.068) 0:34:46.802 ******* 2026-02-20 05:30:40.649985 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:30:40.649992 | orchestrator | 2026-02-20 05:30:40.650000 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:30:40.650008 | orchestrator | Friday 20 February 2026 05:30:40 +0000 (0:00:01.138) 0:34:47.941 ******* 2026-02-20 05:30:40.650061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}})  2026-02-20 05:30:40.650169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:30:40.650178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}})  2026-02-20 05:30:40.650188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:30:40.650229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:40.650264 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}})  2026-02-20 05:30:40.650273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}})  2026-02-20 05:30:40.650288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:41.936788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:30:41.936928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:41.936953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:30:41.936968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:30:41.936983 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:30:41.936999 | orchestrator | 2026-02-20 05:30:41.937013 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:30:41.937028 | orchestrator | Friday 20 February 2026 05:30:41 +0000 (0:00:01.254) 0:34:49.196 ******* 2026-02-20 05:30:41.937065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:41.937131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:41.937172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:41.937190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:41.937207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:41.937226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:30:43.123698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:31:20.992026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:31:20.992200 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.992220 | orchestrator | 2026-02-20 05:31:20.992231 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:31:20.992243 | orchestrator | Friday 20 February 2026 05:30:43 +0000 (0:00:01.399) 0:34:50.595 ******* 2026-02-20 05:31:20.992253 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:31:20.992264 | orchestrator | 2026-02-20 05:31:20.992289 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:31:20.992300 | orchestrator | Friday 20 February 2026 05:30:44 +0000 (0:00:01.506) 0:34:52.101 ******* 2026-02-20 05:31:20.992310 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:31:20.992320 | orchestrator | 2026-02-20 05:31:20.992330 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:31:20.992339 | orchestrator | Friday 20 February 2026 05:30:45 +0000 (0:00:01.138) 0:34:53.240 ******* 2026-02-20 05:31:20.992349 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:31:20.992359 | orchestrator | 2026-02-20 05:31:20.992371 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:31:20.992387 | orchestrator | Friday 20 February 2026 05:30:47 +0000 (0:00:01.479) 0:34:54.720 ******* 2026-02-20 05:31:20.992402 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.992417 | orchestrator | 2026-02-20 05:31:20.992432 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:31:20.992447 | orchestrator | Friday 20 February 2026 05:30:48 +0000 (0:00:01.126) 0:34:55.847 ******* 2026-02-20 05:31:20.992463 | orchestrator | skipping: [testbed-node-3] 2026-02-20 
05:31:20.992479 | orchestrator | 2026-02-20 05:31:20.992496 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:31:20.992514 | orchestrator | Friday 20 February 2026 05:30:49 +0000 (0:00:01.215) 0:34:57.062 ******* 2026-02-20 05:31:20.992532 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.992550 | orchestrator | 2026-02-20 05:31:20.992567 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:31:20.992585 | orchestrator | Friday 20 February 2026 05:30:50 +0000 (0:00:01.169) 0:34:58.232 ******* 2026-02-20 05:31:20.992605 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-20 05:31:20.992625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-20 05:31:20.992643 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-20 05:31:20.992662 | orchestrator | 2026-02-20 05:31:20.992681 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:31:20.992700 | orchestrator | Friday 20 February 2026 05:30:52 +0000 (0:00:01.927) 0:35:00.160 ******* 2026-02-20 05:31:20.992719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 05:31:20.992768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 05:31:20.992788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 05:31:20.992806 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.992823 | orchestrator | 2026-02-20 05:31:20.992841 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:31:20.992858 | orchestrator | Friday 20 February 2026 05:30:53 +0000 (0:00:01.143) 0:35:01.304 ******* 2026-02-20 05:31:20.992876 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-20 05:31:20.992893 | 
orchestrator | 2026-02-20 05:31:20.992912 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:31:20.992931 | orchestrator | Friday 20 February 2026 05:30:55 +0000 (0:00:01.203) 0:35:02.507 ******* 2026-02-20 05:31:20.992949 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.992966 | orchestrator | 2026-02-20 05:31:20.992983 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:31:20.993000 | orchestrator | Friday 20 February 2026 05:30:56 +0000 (0:00:01.177) 0:35:03.684 ******* 2026-02-20 05:31:20.993016 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.993032 | orchestrator | 2026-02-20 05:31:20.993048 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:31:20.993064 | orchestrator | Friday 20 February 2026 05:30:57 +0000 (0:00:01.136) 0:35:04.821 ******* 2026-02-20 05:31:20.993081 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.993126 | orchestrator | 2026-02-20 05:31:20.993139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:31:20.993149 | orchestrator | Friday 20 February 2026 05:30:58 +0000 (0:00:01.125) 0:35:05.946 ******* 2026-02-20 05:31:20.993159 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:31:20.993168 | orchestrator | 2026-02-20 05:31:20.993178 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:31:20.993187 | orchestrator | Friday 20 February 2026 05:30:59 +0000 (0:00:01.246) 0:35:07.193 ******* 2026-02-20 05:31:20.993197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:31:20.993227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:31:20.993237 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-20 05:31:20.993247 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.993257 | orchestrator | 2026-02-20 05:31:20.993267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:31:20.993277 | orchestrator | Friday 20 February 2026 05:31:01 +0000 (0:00:01.428) 0:35:08.621 ******* 2026-02-20 05:31:20.993287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:31:20.993297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:31:20.993306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 05:31:20.993316 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.993326 | orchestrator | 2026-02-20 05:31:20.993336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:31:20.993346 | orchestrator | Friday 20 February 2026 05:31:02 +0000 (0:00:01.365) 0:35:09.986 ******* 2026-02-20 05:31:20.993355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:31:20.993365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:31:20.993375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 05:31:20.993385 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:31:20.993395 | orchestrator | 2026-02-20 05:31:20.993405 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:31:20.993423 | orchestrator | Friday 20 February 2026 05:31:03 +0000 (0:00:01.365) 0:35:11.351 ******* 2026-02-20 05:31:20.993434 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:31:20.993454 | orchestrator | 2026-02-20 05:31:20.993464 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:31:20.993474 | orchestrator | Friday 20 February 2026 05:31:04 +0000 
(0:00:01.120) 0:35:12.472 ******* 2026-02-20 05:31:20.993483 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 05:31:20.993493 | orchestrator | 2026-02-20 05:31:20.993503 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:31:20.993513 | orchestrator | Friday 20 February 2026 05:31:06 +0000 (0:00:01.306) 0:35:13.778 ******* 2026-02-20 05:31:20.993523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:31:20.993533 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:31:20.993542 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:31:20.993552 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 05:31:20.993562 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:31:20.993572 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:31:20.993582 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:31:20.993592 | orchestrator | 2026-02-20 05:31:20.993602 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:31:20.993612 | orchestrator | Friday 20 February 2026 05:31:08 +0000 (0:00:02.100) 0:35:15.879 ******* 2026-02-20 05:31:20.993621 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:31:20.993631 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:31:20.993641 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:31:20.993651 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 05:31:20.993660 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:31:20.993670 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:31:20.993680 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:31:20.993690 | orchestrator |
2026-02-20 05:31:20.993700 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-20 05:31:20.993710 | orchestrator | Friday 20 February 2026  05:31:11 +0000 (0:00:02.836)       0:35:18.716 *******
2026-02-20 05:31:20.993719 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:31:20.993729 | orchestrator |
2026-02-20 05:31:20.993739 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-20 05:31:20.993749 | orchestrator | Friday 20 February 2026  05:31:12 +0000 (0:00:01.518)       0:35:20.234 *******
2026-02-20 05:31:20.993759 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:31:20.993769 | orchestrator |
2026-02-20 05:31:20.993779 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-20 05:31:20.993789 | orchestrator | Friday 20 February 2026  05:31:13 +0000 (0:00:01.192)       0:35:21.427 *******
2026-02-20 05:31:20.993798 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:31:20.993808 | orchestrator |
2026-02-20 05:31:20.993818 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-20 05:31:20.993828 | orchestrator | Friday 20 February 2026  05:31:15 +0000 (0:00:01.249)       0:35:22.677 *******
2026-02-20 05:31:20.993838 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-20 05:31:20.993848 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-02-20 05:31:20.993858 | orchestrator |
2026-02-20 05:31:20.993868 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 05:31:20.993878 | orchestrator | Friday 20 February 2026  05:31:19 +0000 (0:00:04.159)       0:35:26.837 *******
2026-02-20 05:31:20.993888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-20 05:31:20.993918 | orchestrator |
2026-02-20 05:31:20.993929 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 05:31:20.993945 | orchestrator | Friday 20 February 2026  05:31:20 +0000 (0:00:01.622)       0:35:28.459 *******
2026-02-20 05:32:11.346401 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-20 05:32:11.346519 | orchestrator |
2026-02-20 05:32:11.346535 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 05:32:11.346547 | orchestrator | Friday 20 February 2026  05:31:22 +0000 (0:00:01.101)       0:35:29.561 *******
2026-02-20 05:32:11.346558 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.346570 | orchestrator |
2026-02-20 05:32:11.346582 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 05:32:11.346593 | orchestrator | Friday 20 February 2026  05:31:23 +0000 (0:00:01.097)       0:35:30.658 *******
2026-02-20 05:32:11.346604 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.346617 | orchestrator |
2026-02-20 05:32:11.346627 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 05:32:11.346638 | orchestrator | Friday 20 February 2026  05:31:24 +0000 (0:00:01.527)       0:35:32.186 *******
2026-02-20 05:32:11.346649 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.346660 | orchestrator |
2026-02-20 05:32:11.346671 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 05:32:11.346681 | orchestrator | Friday 20 February 2026  05:31:26 +0000 (0:00:01.592)       0:35:33.778 *******
2026-02-20 05:32:11.346692 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.346703 | orchestrator |
2026-02-20 05:32:11.346714 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 05:32:11.346740 | orchestrator | Friday 20 February 2026  05:31:27 +0000 (0:00:01.547)       0:35:35.326 *******
2026-02-20 05:32:11.346752 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.346763 | orchestrator |
2026-02-20 05:32:11.346773 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:32:11.346784 | orchestrator | Friday 20 February 2026  05:31:28 +0000 (0:00:01.122)       0:35:36.449 *******
2026-02-20 05:32:11.346795 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.346806 | orchestrator |
2026-02-20 05:32:11.346817 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:32:11.346828 | orchestrator | Friday 20 February 2026  05:31:30 +0000 (0:00:01.122)       0:35:37.572 *******
2026-02-20 05:32:11.346838 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.346849 | orchestrator |
2026-02-20 05:32:11.346860 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:32:11.346872 | orchestrator | Friday 20 February 2026  05:31:31 +0000 (0:00:01.144)       0:35:38.716 *******
2026-02-20 05:32:11.346883 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.346894 | orchestrator |
2026-02-20 05:32:11.346920 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:32:11.346942 | orchestrator | Friday 20 February 2026  05:31:32 +0000 (0:00:01.550)       0:35:40.266 *******
2026-02-20 05:32:11.346956 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.346969 | orchestrator |
2026-02-20 05:32:11.346981 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 05:32:11.346994 | orchestrator | Friday 20 February 2026  05:31:34 +0000 (0:00:01.527)       0:35:41.794 *******
2026-02-20 05:32:11.347006 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347019 | orchestrator |
2026-02-20 05:32:11.347032 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:32:11.347044 | orchestrator | Friday 20 February 2026  05:31:35 +0000 (0:00:01.106)       0:35:42.901 *******
2026-02-20 05:32:11.347057 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347069 | orchestrator |
2026-02-20 05:32:11.347081 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:32:11.347092 | orchestrator | Friday 20 February 2026  05:31:36 +0000 (0:00:01.109)       0:35:44.010 *******
2026-02-20 05:32:11.347144 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.347156 | orchestrator |
2026-02-20 05:32:11.347167 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:32:11.347178 | orchestrator | Friday 20 February 2026  05:31:37 +0000 (0:00:01.150)       0:35:45.161 *******
2026-02-20 05:32:11.347189 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.347200 | orchestrator |
2026-02-20 05:32:11.347211 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:32:11.347222 | orchestrator | Friday 20 February 2026  05:31:38 +0000 (0:00:01.174)       0:35:46.335 *******
2026-02-20 05:32:11.347233 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.347244 | orchestrator |
2026-02-20 05:32:11.347254 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:32:11.347265 | orchestrator | Friday 20 February 2026  05:31:39 +0000 (0:00:01.115)       0:35:47.451 *******
2026-02-20 05:32:11.347276 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347287 | orchestrator |
2026-02-20 05:32:11.347298 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:32:11.347309 | orchestrator | Friday 20 February 2026  05:31:41 +0000 (0:00:01.140)       0:35:48.592 *******
2026-02-20 05:32:11.347320 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347330 | orchestrator |
2026-02-20 05:32:11.347341 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:32:11.347352 | orchestrator | Friday 20 February 2026  05:31:42 +0000 (0:00:01.114)       0:35:49.706 *******
2026-02-20 05:32:11.347363 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347374 | orchestrator |
2026-02-20 05:32:11.347384 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:32:11.347395 | orchestrator | Friday 20 February 2026  05:31:43 +0000 (0:00:01.115)       0:35:50.821 *******
2026-02-20 05:32:11.347406 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.347417 | orchestrator |
2026-02-20 05:32:11.347427 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:32:11.347438 | orchestrator | Friday 20 February 2026  05:31:44 +0000 (0:00:01.118)       0:35:51.940 *******
2026-02-20 05:32:11.347449 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.347460 | orchestrator |
2026-02-20 05:32:11.347471 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:32:11.347482 | orchestrator | Friday 20 February 2026  05:31:45 +0000 (0:00:01.151)       0:35:53.091 *******
2026-02-20 05:32:11.347492 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347503 | orchestrator |
2026-02-20 05:32:11.347530 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:32:11.347542 | orchestrator | Friday 20 February 2026  05:31:46 +0000 (0:00:01.116)       0:35:54.207 *******
2026-02-20 05:32:11.347553 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347564 | orchestrator |
2026-02-20 05:32:11.347575 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:32:11.347586 | orchestrator | Friday 20 February 2026  05:31:47 +0000 (0:00:01.140)       0:35:55.348 *******
2026-02-20 05:32:11.347596 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347607 | orchestrator |
2026-02-20 05:32:11.347618 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:32:11.347629 | orchestrator | Friday 20 February 2026  05:31:49 +0000 (0:00:01.184)       0:35:56.532 *******
2026-02-20 05:32:11.347640 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347651 | orchestrator |
2026-02-20 05:32:11.347661 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:32:11.347672 | orchestrator | Friday 20 February 2026  05:31:50 +0000 (0:00:01.099)       0:35:57.631 *******
2026-02-20 05:32:11.347683 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347694 | orchestrator |
2026-02-20 05:32:11.347705 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:32:11.347715 | orchestrator | Friday 20 February 2026  05:31:51 +0000 (0:00:01.105)       0:35:58.736 *******
2026-02-20 05:32:11.347734 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347745 | orchestrator |
2026-02-20 05:32:11.347762 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:32:11.347773 | orchestrator | Friday 20 February 2026  05:31:52 +0000 (0:00:01.131)       0:35:59.868 *******
2026-02-20 05:32:11.347784 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347795 | orchestrator |
2026-02-20 05:32:11.347806 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:32:11.347817 | orchestrator | Friday 20 February 2026  05:31:53 +0000 (0:00:01.171)       0:36:01.039 *******
2026-02-20 05:32:11.347828 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347839 | orchestrator |
2026-02-20 05:32:11.347850 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:32:11.347860 | orchestrator | Friday 20 February 2026  05:31:54 +0000 (0:00:01.100)       0:36:02.140 *******
2026-02-20 05:32:11.347871 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347882 | orchestrator |
2026-02-20 05:32:11.347892 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:32:11.347903 | orchestrator | Friday 20 February 2026  05:31:55 +0000 (0:00:01.129)       0:36:03.270 *******
2026-02-20 05:32:11.347914 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347925 | orchestrator |
2026-02-20 05:32:11.347936 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:32:11.347947 | orchestrator | Friday 20 February 2026  05:31:56 +0000 (0:00:01.125)       0:36:04.396 *******
2026-02-20 05:32:11.347957 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.347968 | orchestrator |
2026-02-20 05:32:11.347979 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 05:32:11.347990 | orchestrator | Friday 20 February 2026  05:31:58 +0000 (0:00:01.147)       0:36:05.543 *******
2026-02-20 05:32:11.348001 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.348012 | orchestrator |
2026-02-20 05:32:11.348022 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 05:32:11.348033 | orchestrator | Friday 20 February 2026  05:31:59 +0000 (0:00:01.358)       0:36:06.902 *******
2026-02-20 05:32:11.348044 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.348055 | orchestrator |
2026-02-20 05:32:11.348066 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 05:32:11.348076 | orchestrator | Friday 20 February 2026  05:32:01 +0000 (0:00:02.227)       0:36:08.808 *******
2026-02-20 05:32:11.348087 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.348098 | orchestrator |
2026-02-20 05:32:11.348109 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 05:32:11.348140 | orchestrator | Friday 20 February 2026  05:32:03 +0000 (0:00:02.227)       0:36:11.036 *******
2026-02-20 05:32:11.348152 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-20 05:32:11.348163 | orchestrator |
2026-02-20 05:32:11.348173 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 05:32:11.348185 | orchestrator | Friday 20 February 2026  05:32:04 +0000 (0:00:01.105)       0:36:12.142 *******
2026-02-20 05:32:11.348195 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.348206 | orchestrator |
2026-02-20 05:32:11.348217 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 05:32:11.348228 | orchestrator | Friday 20 February 2026  05:32:05 +0000 (0:00:01.139)       0:36:13.281 *******
2026-02-20 05:32:11.348239 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.348250 | orchestrator |
2026-02-20 05:32:11.348261 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 05:32:11.348272 | orchestrator | Friday 20 February 2026  05:32:06 +0000 (0:00:01.122)       0:36:14.404 *******
2026-02-20 05:32:11.348283 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 05:32:11.348294 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 05:32:11.348312 | orchestrator |
2026-02-20 05:32:11.348323 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 05:32:11.348334 | orchestrator | Friday 20 February 2026  05:32:08 +0000 (0:00:01.803)       0:36:16.208 *******
2026-02-20 05:32:11.348345 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:11.348356 | orchestrator |
2026-02-20 05:32:11.348366 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 05:32:11.348377 | orchestrator | Friday 20 February 2026  05:32:10 +0000 (0:00:01.474)       0:36:17.683 *******
2026-02-20 05:32:11.348388 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:11.348399 | orchestrator |
2026-02-20 05:32:11.348410 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-20 05:32:11.348428 | orchestrator | Friday 20 February 2026  05:32:11 +0000 (0:00:01.132)       0:36:18.815 *******
2026-02-20 05:32:58.261919 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262116 | orchestrator |
2026-02-20 05:32:58.262172 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 05:32:58.262186 | orchestrator | Friday 20 February 2026  05:32:12 +0000 (0:00:01.138)       0:36:19.954 *******
2026-02-20 05:32:58.262196 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262206 | orchestrator |
2026-02-20 05:32:58.262217 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 05:32:58.262228 | orchestrator | Friday 20 February 2026  05:32:13 +0000 (0:00:01.114)       0:36:21.068 *******
2026-02-20 05:32:58.262242 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-20 05:32:58.262260 | orchestrator |
2026-02-20 05:32:58.262277 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-20 05:32:58.262293 | orchestrator | Friday 20 February 2026  05:32:14 +0000 (0:00:01.212)       0:36:22.281 *******
2026-02-20 05:32:58.262309 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:58.262326 | orchestrator |
2026-02-20 05:32:58.262341 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-20 05:32:58.262359 | orchestrator | Friday 20 February 2026  05:32:16 +0000 (0:00:01.810)       0:36:24.091 *******
2026-02-20 05:32:58.262375 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 05:32:58.262393 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 05:32:58.262411 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 05:32:58.262430 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262443 | orchestrator |
2026-02-20 05:32:58.262455 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 05:32:58.262466 | orchestrator | Friday 20 February 2026  05:32:17 +0000 (0:00:01.145)       0:36:25.237 *******
2026-02-20 05:32:58.262478 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262489 | orchestrator |
2026-02-20 05:32:58.262500 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 05:32:58.262512 | orchestrator | Friday 20 February 2026  05:32:18 +0000 (0:00:01.129)       0:36:26.366 *******
2026-02-20 05:32:58.262523 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262535 | orchestrator |
2026-02-20 05:32:58.262546 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 05:32:58.262558 | orchestrator | Friday 20 February 2026  05:32:20 +0000 (0:00:01.184)       0:36:27.551 *******
2026-02-20 05:32:58.262569 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262580 | orchestrator |
2026-02-20 05:32:58.262591 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 05:32:58.262603 | orchestrator | Friday 20 February 2026  05:32:21 +0000 (0:00:01.125)       0:36:28.676 *******
2026-02-20 05:32:58.262614 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262626 | orchestrator |
2026-02-20 05:32:58.262637 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 05:32:58.262649 | orchestrator | Friday 20 February 2026  05:32:22 +0000 (0:00:01.179)       0:36:29.855 *******
2026-02-20 05:32:58.262686 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262697 | orchestrator |
2026-02-20 05:32:58.262709 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 05:32:58.262720 | orchestrator | Friday 20 February 2026  05:32:23 +0000 (0:00:01.205)       0:36:31.061 *******
2026-02-20 05:32:58.262730 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:58.262740 | orchestrator |
2026-02-20 05:32:58.262750 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 05:32:58.262759 | orchestrator | Friday 20 February 2026  05:32:26 +0000 (0:00:02.540)       0:36:33.602 *******
2026-02-20 05:32:58.262769 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:58.262779 | orchestrator |
2026-02-20 05:32:58.262789 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 05:32:58.262799 | orchestrator | Friday 20 February 2026  05:32:27 +0000 (0:00:01.113)       0:36:34.715 *******
2026-02-20 05:32:58.262809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-20 05:32:58.262818 | orchestrator |
2026-02-20 05:32:58.262828 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 05:32:58.262838 | orchestrator | Friday 20 February 2026  05:32:28 +0000 (0:00:01.109)       0:36:35.825 *******
2026-02-20 05:32:58.262847 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262857 | orchestrator |
2026-02-20 05:32:58.262867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 05:32:58.262877 | orchestrator | Friday 20 February 2026  05:32:29 +0000 (0:00:01.158)       0:36:36.984 *******
2026-02-20 05:32:58.262886 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262896 | orchestrator |
2026-02-20 05:32:58.262906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 05:32:58.262916 | orchestrator | Friday 20 February 2026  05:32:30 +0000 (0:00:01.178)       0:36:38.162 *******
2026-02-20 05:32:58.262925 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262935 | orchestrator |
2026-02-20 05:32:58.262945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 05:32:58.262955 | orchestrator | Friday 20 February 2026  05:32:31 +0000 (0:00:01.125)       0:36:39.287 *******
2026-02-20 05:32:58.262964 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.262974 | orchestrator |
2026-02-20 05:32:58.262984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 05:32:58.262994 | orchestrator | Friday 20 February 2026  05:32:32 +0000 (0:00:01.111)       0:36:40.399 *******
2026-02-20 05:32:58.263004 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263013 | orchestrator |
2026-02-20 05:32:58.263023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 05:32:58.263033 | orchestrator | Friday 20 February 2026  05:32:34 +0000 (0:00:01.131)       0:36:41.530 *******
2026-02-20 05:32:58.263043 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263053 | orchestrator |
2026-02-20 05:32:58.263080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 05:32:58.263091 | orchestrator | Friday 20 February 2026  05:32:35 +0000 (0:00:01.131)       0:36:42.662 *******
2026-02-20 05:32:58.263100 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263110 | orchestrator |
2026-02-20 05:32:58.263120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 05:32:58.263225 | orchestrator | Friday 20 February 2026  05:32:36 +0000 (0:00:01.125)       0:36:43.788 *******
2026-02-20 05:32:58.263235 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263245 | orchestrator |
2026-02-20 05:32:58.263255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 05:32:58.263264 | orchestrator | Friday 20 February 2026  05:32:37 +0000 (0:00:01.166)       0:36:44.955 *******
2026-02-20 05:32:58.263274 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:32:58.263283 | orchestrator |
2026-02-20 05:32:58.263332 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 05:32:58.263352 | orchestrator | Friday 20 February 2026  05:32:38 +0000 (0:00:01.194)       0:36:46.149 *******
2026-02-20 05:32:58.263362 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-20 05:32:58.263372 | orchestrator |
2026-02-20 05:32:58.263382 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 05:32:58.263392 | orchestrator | Friday 20 February 2026  05:32:39 +0000 (0:00:01.118)       0:36:47.268 *******
2026-02-20 05:32:58.263406 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-20 05:32:58.263416 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-20 05:32:58.263426 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-20 05:32:58.263436 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-20 05:32:58.263445 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-20 05:32:58.263455 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-20 05:32:58.263464 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-20 05:32:58.263474 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-20 05:32:58.263484 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 05:32:58.263493 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 05:32:58.263503 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 05:32:58.263513 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 05:32:58.263522 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 05:32:58.263532 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 05:32:58.263542 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-20 05:32:58.263551 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-20 05:32:58.263561 | orchestrator |
2026-02-20 05:32:58.263570 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 05:32:58.263580 | orchestrator | Friday 20 February 2026  05:32:46 +0000 (0:00:06.711)       0:36:53.980 *******
2026-02-20 05:32:58.263590 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-20 05:32:58.263600 | orchestrator |
2026-02-20 05:32:58.263609 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-20 05:32:58.263619 | orchestrator | Friday 20 February 2026  05:32:48 +0000 (0:00:01.538)       0:36:55.518 *******
2026-02-20 05:32:58.263629 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:32:58.263639 | orchestrator |
2026-02-20 05:32:58.263649 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-20 05:32:58.263659 | orchestrator | Friday 20 February 2026  05:32:49 +0000 (0:00:01.518)       0:36:57.036 *******
2026-02-20 05:32:58.263668 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:32:58.263678 | orchestrator |
2026-02-20 05:32:58.263688 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 05:32:58.263697 | orchestrator | Friday 20 February 2026  05:32:51 +0000 (0:00:01.148)       0:36:59.015 *******
2026-02-20 05:32:58.263707 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263717 | orchestrator |
2026-02-20 05:32:58.263726 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 05:32:58.263736 | orchestrator | Friday 20 February 2026  05:32:52 +0000 (0:00:01.113)       0:37:00.164 *******
2026-02-20 05:32:58.263745 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263755 | orchestrator |
2026-02-20 05:32:58.263765 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 05:32:58.263779 | orchestrator | Friday 20 February 2026  05:32:53 +0000 (0:00:01.113)       0:37:01.278 *******
2026-02-20 05:32:58.263804 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263828 | orchestrator |
2026-02-20 05:32:58.263846 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 05:32:58.263862 | orchestrator | Friday 20 February 2026  05:32:54 +0000 (0:00:01.104)       0:37:02.382 *******
2026-02-20 05:32:58.263879 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263895 | orchestrator |
2026-02-20 05:32:58.263910 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 05:32:58.263924 | orchestrator | Friday 20 February 2026  05:32:56 +0000 (0:00:01.111)       0:37:03.494 *******
2026-02-20 05:32:58.263939 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.263955 | orchestrator |
2026-02-20 05:32:58.263972 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 05:32:58.263988 | orchestrator | Friday 20 February 2026  05:32:57 +0000 (0:00:01.113)       0:37:04.607 *******
2026-02-20 05:32:58.264006 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:32:58.264024 | orchestrator |
2026-02-20 05:32:58.264052 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 05:33:48.659630 | orchestrator | Friday 20 February 2026  05:32:58 +0000 (0:00:01.121)       0:37:05.729 *******
2026-02-20 05:33:48.659743 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.659762 | orchestrator |
2026-02-20 05:33:48.659775 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 05:33:48.659788 | orchestrator | Friday 20 February 2026  05:32:59 +0000 (0:00:01.100)       0:37:06.830 *******
2026-02-20 05:33:48.659800 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.659811 | orchestrator |
2026-02-20 05:33:48.659823 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 05:33:48.659834 | orchestrator | Friday 20 February 2026  05:33:00 +0000 (0:00:01.093)       0:37:07.923 *******
2026-02-20 05:33:48.659845 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.659857 | orchestrator |
2026-02-20 05:33:48.659868 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 05:33:48.659879 | orchestrator | Friday 20 February 2026  05:33:01 +0000 (0:00:01.086)       0:37:09.009 *******
2026-02-20 05:33:48.659890 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.659901 | orchestrator |
2026-02-20 05:33:48.659913 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 05:33:48.659940 | orchestrator | Friday 20 February 2026  05:33:02 +0000 (0:00:01.204)       0:37:10.215 *******
2026-02-20 05:33:48.659952 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:33:48.659964 | orchestrator |
2026-02-20 05:33:48.659975 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:33:48.659986 | orchestrator | Friday 20 February 2026  05:33:03 +0000 (0:00:01.193)       0:37:11.408 *******
2026-02-20 05:33:48.659997 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-20 05:33:48.660008 | orchestrator |
2026-02-20 05:33:48.660019 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:33:48.660031 | orchestrator | Friday 20 February 2026  05:33:08 +0000 (0:00:04.784)       0:37:16.193 *******
2026-02-20 05:33:48.660043 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:33:48.660064 | orchestrator |
2026-02-20 05:33:48.660081 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-20 05:33:48.660100 | orchestrator | Friday 20 February 2026  05:33:09 +0000 (0:00:01.212)       0:37:17.406 *******
2026-02-20 05:33:48.660120 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-20 05:33:48.660197 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-20 05:33:48.660221 | orchestrator |
2026-02-20 05:33:48.660242 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-20 05:33:48.660263 | orchestrator | Friday 20 February 2026  05:33:18 +0000 (0:00:08.119)       0:37:25.525 *******
2026-02-20 05:33:48.660284 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660304 | orchestrator |
2026-02-20 05:33:48.660324 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-20 05:33:48.660346 | orchestrator | Friday 20 February 2026  05:33:19 +0000 (0:00:01.100)       0:37:26.625 *******
2026-02-20 05:33:48.660366 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660380 | orchestrator |
2026-02-20 05:33:48.660392 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:33:48.660473 | orchestrator | Friday 20 February 2026  05:33:20 +0000 (0:00:01.110)       0:37:27.736 *******
2026-02-20 05:33:48.660487 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660498 | orchestrator |
2026-02-20 05:33:48.660509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:33:48.660520 | orchestrator | Friday 20 February 2026  05:33:21 +0000 (0:00:01.137)       0:37:28.874 *******
2026-02-20 05:33:48.660531 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660542 | orchestrator |
2026-02-20 05:33:48.660553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:33:48.660564 | orchestrator | Friday 20 February 2026  05:33:22 +0000 (0:00:01.146)       0:37:30.020 *******
2026-02-20 05:33:48.660575 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660586 | orchestrator |
2026-02-20 05:33:48.660597 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:33:48.660608 | orchestrator | Friday 20 February 2026  05:33:23 +0000 (0:00:01.166)       0:37:31.186 *******
2026-02-20 05:33:48.660619 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:33:48.660630 | orchestrator |
2026-02-20 05:33:48.660642 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:33:48.660663 | orchestrator | Friday 20 February 2026  05:33:24 +0000 (0:00:01.217)       0:37:32.404 *******
2026-02-20 05:33:48.660729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:33:48.660752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:33:48.660770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:33:48.660791 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660842 | orchestrator |
2026-02-20 05:33:48.660855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:33:48.660888 | orchestrator | Friday 20 February 2026  05:33:26 +0000 (0:00:01.410)       0:37:33.814 *******
2026-02-20 05:33:48.660900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:33:48.660911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:33:48.660922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:33:48.660933 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.660944 | orchestrator |
2026-02-20 05:33:48.660955 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:33:48.660966 | orchestrator | Friday 20 February 2026  05:33:28 +0000 (0:00:01.724)       0:37:35.538 *******
2026-02-20 05:33:48.660977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:33:48.660988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:33:48.660999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:33:48.661010 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.661034 | orchestrator |
2026-02-20 05:33:48.661046 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:33:48.661057 | orchestrator | Friday 20 February 2026  05:33:29 +0000 (0:00:01.713)       0:37:37.252 *******
2026-02-20 05:33:48.661068 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:33:48.661079 | orchestrator |
2026-02-20 05:33:48.661099 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:33:48.661110 | orchestrator | Friday 20 February 2026  05:33:31 +0000 (0:00:01.256)       0:37:38.509 *******
2026-02-20 05:33:48.661121 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-20 05:33:48.661132 | orchestrator |
2026-02-20 05:33:48.661168 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-20 05:33:48.661181 | orchestrator | Friday 20 February 2026  05:33:32 +0000 (0:00:01.332)       0:37:39.842 *******
2026-02-20 05:33:48.661193 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:33:48.661204 | orchestrator |
2026-02-20 05:33:48.661215 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-20 05:33:48.661226 | orchestrator | Friday 20 February 2026  05:33:34 +0000 (0:00:01.755)       0:37:41.597 *******
2026-02-20 05:33:48.661237 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:33:48.661248 | orchestrator |
2026-02-20 05:33:48.661259 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-20 05:33:48.661270 | orchestrator | Friday 20 February 2026  05:33:35 +0000 (0:00:01.120)       0:37:42.717 *******
2026-02-20 05:33:48.661281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:33:48.661293 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:33:48.661304 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:33:48.661314 | orchestrator |
2026-02-20 05:33:48.661325 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-20 05:33:48.661336 | orchestrator | Friday 20 February 2026  05:33:36 +0000 (0:00:01.623)       0:37:44.340 *******
2026-02-20 05:33:48.661347 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-02-20 05:33:48.661358 | orchestrator |
2026-02-20 05:33:48.661369 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-20 05:33:48.661380 | orchestrator | Friday 20 February 2026  05:33:38 +0000 (0:00:01.458)       0:37:45.798 *******
2026-02-20 05:33:48.661391 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:33:48.661402 | orchestrator |
2026-02-20 05:33:48.661413 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-20 05:33:48.661424 | orchestrator | Friday 20 February 2026  05:33:39 +0000 (0:00:00.909)       0:37:46.708 *******
2026-02-20 05:33:48.661435 |
orchestrator | skipping: [testbed-node-3] 2026-02-20 05:33:48.661446 | orchestrator | 2026-02-20 05:33:48.661457 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-20 05:33:48.661468 | orchestrator | Friday 20 February 2026 05:33:40 +0000 (0:00:00.961) 0:37:47.670 ******* 2026-02-20 05:33:48.661479 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:33:48.661490 | orchestrator | 2026-02-20 05:33:48.661501 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-20 05:33:48.661512 | orchestrator | Friday 20 February 2026 05:33:41 +0000 (0:00:01.420) 0:37:49.090 ******* 2026-02-20 05:33:48.661523 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:33:48.661534 | orchestrator | 2026-02-20 05:33:48.661545 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-20 05:33:48.661556 | orchestrator | Friday 20 February 2026 05:33:42 +0000 (0:00:01.106) 0:37:50.197 ******* 2026-02-20 05:33:48.661567 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 05:33:48.661578 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 05:33:48.661589 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 05:33:48.661607 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 05:33:48.661619 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 05:33:48.661638 | orchestrator | 2026-02-20 05:33:48.661657 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-20 05:33:48.661675 | orchestrator | Friday 20 February 2026 05:33:46 +0000 (0:00:03.331) 0:37:53.528 ******* 2026-02-20 05:33:48.661693 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 05:33:48.661713 | orchestrator | 2026-02-20 05:33:48.661733 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-20 05:33:48.661752 | orchestrator | Friday 20 February 2026 05:33:47 +0000 (0:00:01.096) 0:37:54.625 ******* 2026-02-20 05:33:48.661771 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-20 05:33:48.661782 | orchestrator | 2026-02-20 05:33:48.661793 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-20 05:34:56.912625 | orchestrator | Friday 20 February 2026 05:33:48 +0000 (0:00:01.505) 0:37:56.130 ******* 2026-02-20 05:34:56.912719 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 05:34:56.912732 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-20 05:34:56.912743 | orchestrator | 2026-02-20 05:34:56.912753 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-20 05:34:56.912763 | orchestrator | Friday 20 February 2026 05:33:50 +0000 (0:00:01.825) 0:37:57.956 ******* 2026-02-20 05:34:56.912772 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:34:56.912780 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 05:34:56.912789 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:34:56.912798 | orchestrator | 2026-02-20 05:34:56.912807 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:34:56.912816 | orchestrator | Friday 20 February 2026 05:33:53 +0000 (0:00:03.336) 0:38:01.292 ******* 2026-02-20 05:34:56.912824 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-20 05:34:56.912833 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-20 05:34:56.912842 | orchestrator | ok: [testbed-node-3] 
2026-02-20 05:34:56.912851 | orchestrator | 2026-02-20 05:34:56.912871 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-20 05:34:56.912880 | orchestrator | Friday 20 February 2026 05:33:55 +0000 (0:00:01.998) 0:38:03.290 ******* 2026-02-20 05:34:56.912889 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.912898 | orchestrator | 2026-02-20 05:34:56.912906 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-20 05:34:56.912915 | orchestrator | Friday 20 February 2026 05:33:57 +0000 (0:00:01.223) 0:38:04.514 ******* 2026-02-20 05:34:56.912923 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.912932 | orchestrator | 2026-02-20 05:34:56.912941 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-20 05:34:56.912950 | orchestrator | Friday 20 February 2026 05:33:58 +0000 (0:00:01.103) 0:38:05.618 ******* 2026-02-20 05:34:56.912958 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.912967 | orchestrator | 2026-02-20 05:34:56.912976 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-20 05:34:56.912984 | orchestrator | Friday 20 February 2026 05:33:59 +0000 (0:00:01.114) 0:38:06.733 ******* 2026-02-20 05:34:56.912993 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-20 05:34:56.913002 | orchestrator | 2026-02-20 05:34:56.913011 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-20 05:34:56.913020 | orchestrator | Friday 20 February 2026 05:34:00 +0000 (0:00:01.472) 0:38:08.205 ******* 2026-02-20 05:34:56.913028 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:34:56.913037 | orchestrator | 2026-02-20 05:34:56.913046 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-20 05:34:56.913072 | orchestrator | Friday 20 February 2026 05:34:02 +0000 (0:00:01.475) 0:38:09.681 ******* 2026-02-20 05:34:56.913081 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:34:56.913090 | orchestrator | 2026-02-20 05:34:56.913098 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-20 05:34:56.913107 | orchestrator | Friday 20 February 2026 05:34:06 +0000 (0:00:04.004) 0:38:13.686 ******* 2026-02-20 05:34:56.913116 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-20 05:34:56.913124 | orchestrator | 2026-02-20 05:34:56.913133 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-20 05:34:56.913141 | orchestrator | Friday 20 February 2026 05:34:07 +0000 (0:00:01.515) 0:38:15.201 ******* 2026-02-20 05:34:56.913150 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:34:56.913158 | orchestrator | 2026-02-20 05:34:56.913195 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-20 05:34:56.913206 | orchestrator | Friday 20 February 2026 05:34:09 +0000 (0:00:01.987) 0:38:17.189 ******* 2026-02-20 05:34:56.913215 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:34:56.913225 | orchestrator | 2026-02-20 05:34:56.913235 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-20 05:34:56.913245 | orchestrator | Friday 20 February 2026 05:34:11 +0000 (0:00:01.981) 0:38:19.171 ******* 2026-02-20 05:34:56.913255 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:34:56.913264 | orchestrator | 2026-02-20 05:34:56.913275 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-20 05:34:56.913285 | orchestrator | Friday 20 February 2026 05:34:13 +0000 (0:00:02.269) 0:38:21.441 ******* 2026-02-20 05:34:56.913295 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 05:34:56.913305 | orchestrator | 2026-02-20 05:34:56.913315 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-20 05:34:56.913326 | orchestrator | Friday 20 February 2026 05:34:15 +0000 (0:00:01.131) 0:38:22.572 ******* 2026-02-20 05:34:56.913335 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.913345 | orchestrator | 2026-02-20 05:34:56.913355 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-20 05:34:56.913365 | orchestrator | Friday 20 February 2026 05:34:16 +0000 (0:00:01.159) 0:38:23.732 ******* 2026-02-20 05:34:56.913375 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-20 05:34:56.913385 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 05:34:56.913395 | orchestrator | 2026-02-20 05:34:56.913405 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-20 05:34:56.913415 | orchestrator | Friday 20 February 2026 05:34:18 +0000 (0:00:01.843) 0:38:25.575 ******* 2026-02-20 05:34:56.913424 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-20 05:34:56.913434 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 05:34:56.913445 | orchestrator | 2026-02-20 05:34:56.913455 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-20 05:34:56.913464 | orchestrator | Friday 20 February 2026 05:34:21 +0000 (0:00:02.920) 0:38:28.496 ******* 2026-02-20 05:34:56.913474 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-20 05:34:56.913497 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-20 05:34:56.913506 | orchestrator | 2026-02-20 05:34:56.913515 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-20 05:34:56.913524 | orchestrator | Friday 20 February 2026 05:34:25 +0000 (0:00:04.772) 0:38:33.268 ******* 2026-02-20 05:34:56.913540 | orchestrator 
| skipping: [testbed-node-3] 2026-02-20 05:34:56.913554 | orchestrator | 2026-02-20 05:34:56.913568 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-20 05:34:56.913582 | orchestrator | Friday 20 February 2026 05:34:27 +0000 (0:00:01.232) 0:38:34.501 ******* 2026-02-20 05:34:56.913597 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.913613 | orchestrator | 2026-02-20 05:34:56.913627 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-20 05:34:56.913653 | orchestrator | Friday 20 February 2026 05:34:28 +0000 (0:00:01.212) 0:38:35.713 ******* 2026-02-20 05:34:56.913668 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.913684 | orchestrator | 2026-02-20 05:34:56.913694 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-20 05:34:56.913702 | orchestrator | Friday 20 February 2026 05:34:29 +0000 (0:00:01.601) 0:38:37.315 ******* 2026-02-20 05:34:56.913711 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.913720 | orchestrator | 2026-02-20 05:34:56.913734 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-20 05:34:56.913743 | orchestrator | Friday 20 February 2026 05:34:30 +0000 (0:00:01.145) 0:38:38.460 ******* 2026-02-20 05:34:56.913752 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:34:56.913761 | orchestrator | 2026-02-20 05:34:56.913769 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-20 05:34:56.913778 | orchestrator | Friday 20 February 2026 05:34:32 +0000 (0:00:01.107) 0:38:39.568 ******* 2026-02-20 05:34:56.913787 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-20 05:34:56.913796 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-20 05:34:56.913805 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-20 05:34:56.913814 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:34:56.913823 | orchestrator | 2026-02-20 05:34:56.913831 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-20 05:34:56.913840 | orchestrator | 2026-02-20 05:34:56.913849 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:34:56.913857 | orchestrator | Friday 20 February 2026 05:34:43 +0000 (0:00:11.084) 0:38:50.653 ******* 2026-02-20 05:34:56.913871 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-20 05:34:56.913884 | orchestrator | 2026-02-20 05:34:56.913908 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:34:56.913922 | orchestrator | Friday 20 February 2026 05:34:44 +0000 (0:00:01.100) 0:38:51.753 ******* 2026-02-20 05:34:56.913936 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.913949 | orchestrator | 2026-02-20 05:34:56.913961 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:34:56.913975 | orchestrator | Friday 20 February 2026 05:34:45 +0000 (0:00:01.461) 0:38:53.215 ******* 2026-02-20 05:34:56.913989 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.914002 | orchestrator | 2026-02-20 05:34:56.914062 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:34:56.914081 | orchestrator | Friday 20 February 2026 05:34:46 +0000 (0:00:01.119) 0:38:54.335 ******* 2026-02-20 05:34:56.914096 | orchestrator | ok: 
[testbed-node-4] 2026-02-20 05:34:56.914106 | orchestrator | 2026-02-20 05:34:56.914114 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:34:56.914123 | orchestrator | Friday 20 February 2026 05:34:48 +0000 (0:00:01.459) 0:38:55.794 ******* 2026-02-20 05:34:56.914132 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.914141 | orchestrator | 2026-02-20 05:34:56.914149 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:34:56.914158 | orchestrator | Friday 20 February 2026 05:34:49 +0000 (0:00:01.103) 0:38:56.897 ******* 2026-02-20 05:34:56.914189 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.914198 | orchestrator | 2026-02-20 05:34:56.914207 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:34:56.914216 | orchestrator | Friday 20 February 2026 05:34:50 +0000 (0:00:01.154) 0:38:58.052 ******* 2026-02-20 05:34:56.914225 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.914233 | orchestrator | 2026-02-20 05:34:56.914242 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:34:56.914260 | orchestrator | Friday 20 February 2026 05:34:51 +0000 (0:00:01.135) 0:38:59.187 ******* 2026-02-20 05:34:56.914268 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:34:56.914277 | orchestrator | 2026-02-20 05:34:56.914286 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:34:56.914297 | orchestrator | Friday 20 February 2026 05:34:52 +0000 (0:00:01.110) 0:39:00.298 ******* 2026-02-20 05:34:56.914312 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:34:56.914326 | orchestrator | 2026-02-20 05:34:56.914340 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:34:56.914355 | orchestrator | 
Friday 20 February 2026 05:34:53 +0000 (0:00:01.112) 0:39:01.411 ******* 2026-02-20 05:34:56.914369 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:34:56.914384 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:34:56.914399 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:34:56.914413 | orchestrator | 2026-02-20 05:34:56.914428 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:34:56.914439 | orchestrator | Friday 20 February 2026 05:34:55 +0000 (0:00:01.737) 0:39:03.148 ******* 2026-02-20 05:34:56.914458 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:35:20.463098 | orchestrator | 2026-02-20 05:35:20.463275 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:35:20.463295 | orchestrator | Friday 20 February 2026 05:34:56 +0000 (0:00:01.236) 0:39:04.384 ******* 2026-02-20 05:35:20.463302 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:35:20.463310 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:35:20.463317 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:35:20.463324 | orchestrator | 2026-02-20 05:35:20.463331 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:35:20.463338 | orchestrator | Friday 20 February 2026 05:34:59 +0000 (0:00:02.877) 0:39:07.262 ******* 2026-02-20 05:35:20.463346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-20 05:35:20.463353 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-20 05:35:20.463360 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-2)  2026-02-20 05:35:20.463367 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463374 | orchestrator | 2026-02-20 05:35:20.463395 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:35:20.463403 | orchestrator | Friday 20 February 2026 05:35:01 +0000 (0:00:01.479) 0:39:08.741 ******* 2026-02-20 05:35:20.463411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463427 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463435 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463442 | orchestrator | 2026-02-20 05:35:20.463449 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:35:20.463456 | orchestrator | Friday 20 February 2026 05:35:02 +0000 (0:00:01.634) 0:39:10.376 ******* 2026-02-20 05:35:20.463465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:35:20.463513 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463521 | orchestrator | 2026-02-20 05:35:20.463527 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:35:20.463534 | orchestrator | Friday 20 February 2026 05:35:04 +0000 (0:00:01.156) 0:39:11.532 ******* 2026-02-20 05:35:20.463558 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:34:57.435762', 'end': '2026-02-20 05:34:57.498813', 'delta': '0:00:00.063051', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:35:20.463568 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:34:58.010085', 'end': '2026-02-20 05:34:58.059923', 'delta': '0:00:00.049838', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:35:20.463585 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:34:58.589343', 'end': '2026-02-20 05:34:58.636991', 'delta': '0:00:00.047648', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:35:20.463597 | orchestrator | 2026-02-20 05:35:20.463608 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:35:20.463629 | orchestrator | Friday 20 February 2026 05:35:05 +0000 (0:00:01.196) 0:39:12.729 ******* 2026-02-20 05:35:20.463642 | 
orchestrator | ok: [testbed-node-4] 2026-02-20 05:35:20.463654 | orchestrator | 2026-02-20 05:35:20.463666 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:35:20.463675 | orchestrator | Friday 20 February 2026 05:35:06 +0000 (0:00:01.218) 0:39:13.947 ******* 2026-02-20 05:35:20.463683 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463691 | orchestrator | 2026-02-20 05:35:20.463699 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:35:20.463706 | orchestrator | Friday 20 February 2026 05:35:07 +0000 (0:00:01.225) 0:39:15.173 ******* 2026-02-20 05:35:20.463714 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:35:20.463722 | orchestrator | 2026-02-20 05:35:20.463729 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:35:20.463737 | orchestrator | Friday 20 February 2026 05:35:08 +0000 (0:00:01.151) 0:39:16.325 ******* 2026-02-20 05:35:20.463745 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:35:20.463752 | orchestrator | 2026-02-20 05:35:20.463760 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:35:20.463768 | orchestrator | Friday 20 February 2026 05:35:11 +0000 (0:00:02.428) 0:39:18.754 ******* 2026-02-20 05:35:20.463775 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:35:20.463783 | orchestrator | 2026-02-20 05:35:20.463790 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:35:20.463798 | orchestrator | Friday 20 February 2026 05:35:12 +0000 (0:00:01.135) 0:39:19.890 ******* 2026-02-20 05:35:20.463806 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463813 | orchestrator | 2026-02-20 05:35:20.463821 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-02-20 05:35:20.463828 | orchestrator | Friday 20 February 2026 05:35:13 +0000 (0:00:01.186) 0:39:21.076 ******* 2026-02-20 05:35:20.463836 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463843 | orchestrator | 2026-02-20 05:35:20.463851 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:35:20.463859 | orchestrator | Friday 20 February 2026 05:35:14 +0000 (0:00:01.200) 0:39:22.277 ******* 2026-02-20 05:35:20.463867 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463874 | orchestrator | 2026-02-20 05:35:20.463882 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:35:20.463890 | orchestrator | Friday 20 February 2026 05:35:15 +0000 (0:00:01.126) 0:39:23.404 ******* 2026-02-20 05:35:20.463898 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463906 | orchestrator | 2026-02-20 05:35:20.463913 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:35:20.463921 | orchestrator | Friday 20 February 2026 05:35:17 +0000 (0:00:01.104) 0:39:24.509 ******* 2026-02-20 05:35:20.463929 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:35:20.463937 | orchestrator | 2026-02-20 05:35:20.463944 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:35:20.463952 | orchestrator | Friday 20 February 2026 05:35:18 +0000 (0:00:01.163) 0:39:25.672 ******* 2026-02-20 05:35:20.463960 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:35:20.463967 | orchestrator | 2026-02-20 05:35:20.463975 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:35:20.463983 | orchestrator | Friday 20 February 2026 05:35:19 +0000 (0:00:01.102) 0:39:26.775 ******* 2026-02-20 05:35:20.463990 | orchestrator | ok: [testbed-node-4] 
2026-02-20 05:35:20.463996 | orchestrator |
2026-02-20 05:35:20.464018 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-20 05:35:20.464030 | orchestrator | Friday 20 February 2026 05:35:20 +0000 (0:00:01.157) 0:39:27.933 *******
2026-02-20 05:35:22.969814 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:35:22.969971 | orchestrator |
2026-02-20 05:35:22.969999 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-20 05:35:22.970125 | orchestrator | Friday 20 February 2026 05:35:21 +0000 (0:00:01.122) 0:39:29.055 *******
2026-02-20 05:35:22.970151 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:35:22.970170 | orchestrator |
2026-02-20 05:35:22.970242 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-20 05:35:22.970252 | orchestrator | Friday 20 February 2026 05:35:22 +0000 (0:00:01.161) 0:39:30.216 *******
2026-02-20 05:35:22.970264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}})
2026-02-20 05:35:22.970310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-20 05:35:22.970323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}})
2026-02-20 05:35:22.970335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-20 05:35:22.970403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}})
2026-02-20 05:35:22.970458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}})
2026-02-20 05:35:22.970471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:22.970511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-20 05:35:24.277704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:24.277805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:35:24.277821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-20 05:35:24.277835 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:35:24.277847 | orchestrator |
2026-02-20 05:35:24.277859 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-20 05:35:24.277870 | orchestrator | Friday 20 February 2026 05:35:24 +0000 (0:00:01.325) 0:39:31.542 *******
2026-02-20 05:35:24.277882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.277916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.277942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.277973 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.277986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.278001 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.278104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.278132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:24.278162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.537990 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538153 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538415 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538439 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:35:29.538457 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:35:29.538475 | orchestrator |
2026-02-20 05:35:29.538492 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 05:35:29.538511 | orchestrator | Friday 20 February 2026 05:35:25 +0000 (0:00:01.477) 0:39:32.988 *******
2026-02-20 05:35:29.538528 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:35:29.538546 | orchestrator |
2026-02-20 05:35:29.538558 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:35:29.538569 | orchestrator | Friday 20 February 2026 05:35:26 +0000 (0:00:01.111) 0:39:34.465 *******
2026-02-20 05:35:29.538581 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:35:29.538598 | orchestrator |
2026-02-20 05:35:29.538623 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:35:29.538641 | orchestrator | Friday 20 February 2026 05:35:28 +0000 (0:00:01.437) 0:39:35.577 *******
2026-02-20 05:35:29.538657 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:35:29.538673 | orchestrator |
2026-02-20 05:35:29.538691 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:35:29.538719 | orchestrator | Friday 20 February 2026 05:35:29 +0000 (0:00:01.437) 0:39:37.014 *******
2026-02-20 05:36:10.276652 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276739 | orchestrator |
2026-02-20 05:36:10.276749 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:36:10.276756 | orchestrator | Friday 20 February 2026 05:35:30 +0000 (0:00:01.134) 0:39:38.149 *******
2026-02-20 05:36:10.276762 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276767 | orchestrator |
2026-02-20 05:36:10.276773 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:36:10.276778 | orchestrator | Friday 20 February 2026 05:35:31 +0000 (0:00:01.209) 0:39:39.358 *******
2026-02-20 05:36:10.276801 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276807 | orchestrator |
2026-02-20 05:36:10.276812 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:36:10.276818 | orchestrator | Friday 20 February 2026 05:35:32 +0000 (0:00:01.123) 0:39:40.481 *******
2026-02-20 05:36:10.276824 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 05:36:10.276829 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 05:36:10.276835 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 05:36:10.276840 | orchestrator |
2026-02-20 05:36:10.276846 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:36:10.276854 | orchestrator | Friday 20 February 2026 05:35:34 +0000 (0:00:01.676) 0:39:42.158 *******
2026-02-20 05:36:10.276863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 05:36:10.276871 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 05:36:10.276879 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 05:36:10.276888 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276896 | orchestrator |
2026-02-20 05:36:10.276905 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:36:10.276914 | orchestrator | Friday 20 February 2026 05:35:35 +0000 (0:00:01.127) 0:39:43.285 *******
2026-02-20 05:36:10.276919 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-20 05:36:10.276925 | orchestrator |
2026-02-20 05:36:10.276931 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:36:10.276938 | orchestrator | Friday 20 February 2026 05:35:36 +0000 (0:00:01.103) 0:39:44.389 *******
2026-02-20 05:36:10.276943 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276948 | orchestrator |
2026-02-20 05:36:10.276953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:36:10.276958 | orchestrator | Friday 20 February 2026 05:35:38 +0000 (0:00:01.155) 0:39:45.545 *******
2026-02-20 05:36:10.276963 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276969 | orchestrator |
2026-02-20 05:36:10.276974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:36:10.276979 | orchestrator | Friday 20 February 2026 05:35:39 +0000 (0:00:01.126) 0:39:46.671 *******
2026-02-20 05:36:10.276984 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.276989 | orchestrator |
2026-02-20 05:36:10.276994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:36:10.277000 | orchestrator | Friday 20 February 2026 05:35:40 +0000 (0:00:01.167) 0:39:47.839 *******
2026-02-20 05:36:10.277005 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:36:10.277010 | orchestrator |
2026-02-20 05:36:10.277016 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:36:10.277021 | orchestrator | Friday 20 February 2026 05:35:41 +0000 (0:00:01.212) 0:39:49.051 *******
2026-02-20 05:36:10.277026 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:36:10.277031 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:36:10.277036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:36:10.277041 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.277046 | orchestrator |
2026-02-20 05:36:10.277051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:36:10.277057 | orchestrator | Friday 20 February 2026 05:35:42 +0000 (0:00:01.367) 0:39:50.419 *******
2026-02-20 05:36:10.277062 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:36:10.277067 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:36:10.277072 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:36:10.277077 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.277082 | orchestrator |
2026-02-20 05:36:10.277103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:36:10.277109 | orchestrator | Friday 20 February 2026 05:35:44 +0000 (0:00:01.372) 0:39:51.791 *******
2026-02-20 05:36:10.277117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:36:10.277126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:36:10.277134 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:36:10.277142 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:36:10.277166 | orchestrator |
2026-02-20 05:36:10.277174 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:36:10.277182 | orchestrator | Friday 20 February 2026 05:35:45 +0000 (0:00:01.385) 0:39:53.177 *******
2026-02-20 05:36:10.277232 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:36:10.277242 | orchestrator |
2026-02-20 05:36:10.277251 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:36:10.277260 | orchestrator | Friday 20 February 2026 05:35:46 +0000 (0:00:01.143) 0:39:54.321 *******
2026-02-20 05:36:10.277269 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-20 05:36:10.277278 | orchestrator |
2026-02-20 05:36:10.277287 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:36:10.277294 | orchestrator | Friday 20 February 2026 05:35:48 +0000 (0:00:01.310) 0:39:55.631 *******
2026-02-20 05:36:10.277312 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:36:10.277318 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:36:10.277323 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:36:10.277328 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:36:10.277333 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:36:10.277338 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:36:10.277343 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:36:10.277348 | orchestrator |
2026-02-20 05:36:10.277353 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:36:10.277358 | orchestrator | Friday 20 February 2026 05:35:49 +0000 (0:00:01.782) 0:39:57.414 *******
2026-02-20 05:36:10.277364 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:36:10.277369 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:36:10.277374 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:36:10.277379 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:36:10.277384 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:36:10.277389 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:36:10.277394 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:36:10.277399 | orchestrator |
2026-02-20 05:36:10.277404 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-20 05:36:10.277409 | orchestrator | Friday 20 February 2026 05:35:52 +0000 (0:00:02.211) 0:39:59.625 *******
2026-02-20 05:36:10.277414 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:36:10.277419 | orchestrator |
2026-02-20 05:36:10.277425 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-20 05:36:10.277430 | orchestrator | Friday 20 February 2026 05:35:53 +0000 (0:00:01.160) 0:40:00.786 *******
2026-02-20 05:36:10.277435 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:36:10.277440 | orchestrator |
2026-02-20 05:36:10.277445 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-20 05:36:10.277450 | orchestrator | Friday 20 February 2026 05:35:54 +0000 (0:00:00.781) 0:40:01.567 *******
2026-02-20 05:36:10.277462 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:36:10.277467 | orchestrator |
2026-02-20 05:36:10.277472 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-20 05:36:10.277477 | orchestrator | Friday 20 February 2026 05:35:54 +0000 (0:00:00.892) 0:40:02.460 *******
2026-02-20 05:36:10.277482 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-20 05:36:10.277487 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-20 05:36:10.277492 | orchestrator |
2026-02-20 05:36:10.277498 | orchestrator | TASK [ceph-handler : Include
check_running_cluster.yml] ************************ 2026-02-20 05:36:10.277503 | orchestrator | Friday 20 February 2026 05:35:58 +0000 (0:00:03.805) 0:40:06.266 ******* 2026-02-20 05:36:10.277508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-20 05:36:10.277513 | orchestrator | 2026-02-20 05:36:10.277518 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:36:10.277523 | orchestrator | Friday 20 February 2026 05:35:59 +0000 (0:00:01.177) 0:40:07.443 ******* 2026-02-20 05:36:10.277528 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-20 05:36:10.277533 | orchestrator | 2026-02-20 05:36:10.277538 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:36:10.277543 | orchestrator | Friday 20 February 2026 05:36:01 +0000 (0:00:01.110) 0:40:08.554 ******* 2026-02-20 05:36:10.277548 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:10.277554 | orchestrator | 2026-02-20 05:36:10.277559 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:36:10.277564 | orchestrator | Friday 20 February 2026 05:36:02 +0000 (0:00:01.127) 0:40:09.682 ******* 2026-02-20 05:36:10.277569 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:10.277574 | orchestrator | 2026-02-20 05:36:10.277584 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:36:10.277590 | orchestrator | Friday 20 February 2026 05:36:03 +0000 (0:00:01.523) 0:40:11.205 ******* 2026-02-20 05:36:10.277598 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:10.277607 | orchestrator | 2026-02-20 05:36:10.277615 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:36:10.277623 | orchestrator | 
Friday 20 February 2026 05:36:05 +0000 (0:00:01.571) 0:40:12.777 ******* 2026-02-20 05:36:10.277631 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:10.277640 | orchestrator | 2026-02-20 05:36:10.277648 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:36:10.277657 | orchestrator | Friday 20 February 2026 05:36:06 +0000 (0:00:01.561) 0:40:14.338 ******* 2026-02-20 05:36:10.277666 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:10.277675 | orchestrator | 2026-02-20 05:36:10.277681 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:36:10.277686 | orchestrator | Friday 20 February 2026 05:36:08 +0000 (0:00:01.149) 0:40:15.488 ******* 2026-02-20 05:36:10.277691 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:10.277696 | orchestrator | 2026-02-20 05:36:10.277701 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:36:10.277706 | orchestrator | Friday 20 February 2026 05:36:09 +0000 (0:00:01.138) 0:40:16.627 ******* 2026-02-20 05:36:10.277712 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:10.277717 | orchestrator | 2026-02-20 05:36:10.277726 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:36:51.016513 | orchestrator | Friday 20 February 2026 05:36:10 +0000 (0:00:01.119) 0:40:17.747 ******* 2026-02-20 05:36:51.016645 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.016668 | orchestrator | 2026-02-20 05:36:51.016683 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 05:36:51.016697 | orchestrator | Friday 20 February 2026 05:36:11 +0000 (0:00:01.502) 0:40:19.250 ******* 2026-02-20 05:36:51.016712 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.016751 | orchestrator | 2026-02-20 05:36:51.016765 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:36:51.016773 | orchestrator | Friday 20 February 2026 05:36:13 +0000 (0:00:01.554) 0:40:20.805 ******* 2026-02-20 05:36:51.016782 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.016791 | orchestrator | 2026-02-20 05:36:51.016799 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:36:51.016807 | orchestrator | Friday 20 February 2026 05:36:14 +0000 (0:00:00.758) 0:40:21.563 ******* 2026-02-20 05:36:51.016815 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.016823 | orchestrator | 2026-02-20 05:36:51.016831 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:36:51.016839 | orchestrator | Friday 20 February 2026 05:36:14 +0000 (0:00:00.831) 0:40:22.394 ******* 2026-02-20 05:36:51.016847 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.016855 | orchestrator | 2026-02-20 05:36:51.016867 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:36:51.016880 | orchestrator | Friday 20 February 2026 05:36:15 +0000 (0:00:00.846) 0:40:23.241 ******* 2026-02-20 05:36:51.016893 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.016906 | orchestrator | 2026-02-20 05:36:51.016920 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:36:51.016932 | orchestrator | Friday 20 February 2026 05:36:16 +0000 (0:00:00.797) 0:40:24.038 ******* 2026-02-20 05:36:51.016945 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.016957 | orchestrator | 2026-02-20 05:36:51.016969 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:36:51.016982 | orchestrator | Friday 20 February 2026 05:36:17 +0000 (0:00:00.805) 0:40:24.844 ******* 2026-02-20 05:36:51.016996 | 
orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017010 | orchestrator | 2026-02-20 05:36:51.017023 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:36:51.017037 | orchestrator | Friday 20 February 2026 05:36:18 +0000 (0:00:00.792) 0:40:25.637 ******* 2026-02-20 05:36:51.017051 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017065 | orchestrator | 2026-02-20 05:36:51.017080 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:36:51.017094 | orchestrator | Friday 20 February 2026 05:36:18 +0000 (0:00:00.760) 0:40:26.397 ******* 2026-02-20 05:36:51.017109 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017119 | orchestrator | 2026-02-20 05:36:51.017128 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:36:51.017138 | orchestrator | Friday 20 February 2026 05:36:19 +0000 (0:00:00.816) 0:40:27.214 ******* 2026-02-20 05:36:51.017147 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.017156 | orchestrator | 2026-02-20 05:36:51.017166 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:36:51.017175 | orchestrator | Friday 20 February 2026 05:36:20 +0000 (0:00:00.821) 0:40:28.035 ******* 2026-02-20 05:36:51.017185 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.017194 | orchestrator | 2026-02-20 05:36:51.017247 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:36:51.017255 | orchestrator | Friday 20 February 2026 05:36:21 +0000 (0:00:00.762) 0:40:28.797 ******* 2026-02-20 05:36:51.017263 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017272 | orchestrator | 2026-02-20 05:36:51.017281 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 
05:36:51.017294 | orchestrator | Friday 20 February 2026 05:36:22 +0000 (0:00:00.759) 0:40:29.557 ******* 2026-02-20 05:36:51.017308 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017322 | orchestrator | 2026-02-20 05:36:51.017335 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:36:51.017350 | orchestrator | Friday 20 February 2026 05:36:22 +0000 (0:00:00.761) 0:40:30.318 ******* 2026-02-20 05:36:51.017363 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017390 | orchestrator | 2026-02-20 05:36:51.017404 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:36:51.017436 | orchestrator | Friday 20 February 2026 05:36:23 +0000 (0:00:00.748) 0:40:31.066 ******* 2026-02-20 05:36:51.017452 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017467 | orchestrator | 2026-02-20 05:36:51.017482 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:36:51.017496 | orchestrator | Friday 20 February 2026 05:36:24 +0000 (0:00:00.786) 0:40:31.853 ******* 2026-02-20 05:36:51.017510 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017518 | orchestrator | 2026-02-20 05:36:51.017526 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:36:51.017535 | orchestrator | Friday 20 February 2026 05:36:25 +0000 (0:00:00.763) 0:40:32.616 ******* 2026-02-20 05:36:51.017549 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017562 | orchestrator | 2026-02-20 05:36:51.017575 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:36:51.017589 | orchestrator | Friday 20 February 2026 05:36:25 +0000 (0:00:00.780) 0:40:33.397 ******* 2026-02-20 05:36:51.017603 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017616 | 
orchestrator | 2026-02-20 05:36:51.017630 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:36:51.017644 | orchestrator | Friday 20 February 2026 05:36:26 +0000 (0:00:00.745) 0:40:34.142 ******* 2026-02-20 05:36:51.017658 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017669 | orchestrator | 2026-02-20 05:36:51.017677 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:36:51.017685 | orchestrator | Friday 20 February 2026 05:36:27 +0000 (0:00:00.768) 0:40:34.910 ******* 2026-02-20 05:36:51.017712 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017720 | orchestrator | 2026-02-20 05:36:51.017728 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:36:51.017736 | orchestrator | Friday 20 February 2026 05:36:28 +0000 (0:00:00.817) 0:40:35.728 ******* 2026-02-20 05:36:51.017744 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017754 | orchestrator | 2026-02-20 05:36:51.017767 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:36:51.017780 | orchestrator | Friday 20 February 2026 05:36:29 +0000 (0:00:00.764) 0:40:36.492 ******* 2026-02-20 05:36:51.017793 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017805 | orchestrator | 2026-02-20 05:36:51.017819 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:36:51.017832 | orchestrator | Friday 20 February 2026 05:36:29 +0000 (0:00:00.763) 0:40:37.255 ******* 2026-02-20 05:36:51.017845 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.017858 | orchestrator | 2026-02-20 05:36:51.017871 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:36:51.017883 | orchestrator | Friday 20 
February 2026 05:36:30 +0000 (0:00:00.811) 0:40:38.067 ******* 2026-02-20 05:36:51.017895 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.017908 | orchestrator | 2026-02-20 05:36:51.017920 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:36:51.017933 | orchestrator | Friday 20 February 2026 05:36:32 +0000 (0:00:01.551) 0:40:39.619 ******* 2026-02-20 05:36:51.017945 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.017958 | orchestrator | 2026-02-20 05:36:51.017972 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:36:51.017985 | orchestrator | Friday 20 February 2026 05:36:35 +0000 (0:00:02.956) 0:40:42.576 ******* 2026-02-20 05:36:51.017998 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-20 05:36:51.018011 | orchestrator | 2026-02-20 05:36:51.018101 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:36:51.018117 | orchestrator | Friday 20 February 2026 05:36:36 +0000 (0:00:01.217) 0:40:43.794 ******* 2026-02-20 05:36:51.018143 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018157 | orchestrator | 2026-02-20 05:36:51.018171 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:36:51.018183 | orchestrator | Friday 20 February 2026 05:36:37 +0000 (0:00:01.133) 0:40:44.928 ******* 2026-02-20 05:36:51.018195 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018243 | orchestrator | 2026-02-20 05:36:51.018296 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 05:36:51.018311 | orchestrator | Friday 20 February 2026 05:36:38 +0000 (0:00:01.142) 0:40:46.070 ******* 2026-02-20 05:36:51.018324 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:36:51.018338 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:36:51.018351 | orchestrator | 2026-02-20 05:36:51.018365 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:36:51.018377 | orchestrator | Friday 20 February 2026 05:36:40 +0000 (0:00:01.819) 0:40:47.890 ******* 2026-02-20 05:36:51.018389 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.018401 | orchestrator | 2026-02-20 05:36:51.018412 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:36:51.018425 | orchestrator | Friday 20 February 2026 05:36:41 +0000 (0:00:01.503) 0:40:49.393 ******* 2026-02-20 05:36:51.018438 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018451 | orchestrator | 2026-02-20 05:36:51.018465 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:36:51.018479 | orchestrator | Friday 20 February 2026 05:36:43 +0000 (0:00:01.122) 0:40:50.516 ******* 2026-02-20 05:36:51.018492 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018505 | orchestrator | 2026-02-20 05:36:51.018518 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:36:51.018531 | orchestrator | Friday 20 February 2026 05:36:43 +0000 (0:00:00.801) 0:40:51.318 ******* 2026-02-20 05:36:51.018543 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018556 | orchestrator | 2026-02-20 05:36:51.018570 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:36:51.018583 | orchestrator | Friday 20 February 2026 05:36:44 +0000 (0:00:00.750) 0:40:52.068 ******* 2026-02-20 05:36:51.018608 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-20 05:36:51.018622 | orchestrator | 2026-02-20 05:36:51.018635 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:36:51.018648 | orchestrator | Friday 20 February 2026 05:36:45 +0000 (0:00:01.124) 0:40:53.192 ******* 2026-02-20 05:36:51.018661 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:36:51.018674 | orchestrator | 2026-02-20 05:36:51.018687 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:36:51.018700 | orchestrator | Friday 20 February 2026 05:36:47 +0000 (0:00:01.867) 0:40:55.060 ******* 2026-02-20 05:36:51.018715 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:36:51.018727 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:36:51.018742 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:36:51.018756 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018768 | orchestrator | 2026-02-20 05:36:51.018781 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:36:51.018795 | orchestrator | Friday 20 February 2026 05:36:48 +0000 (0:00:01.152) 0:40:56.212 ******* 2026-02-20 05:36:51.018807 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:36:51.018821 | orchestrator | 2026-02-20 05:36:51.018834 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:36:51.018848 | orchestrator | Friday 20 February 2026 05:36:49 +0000 (0:00:01.096) 0:40:57.309 ******* 2026-02-20 05:36:51.018891 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714675 | orchestrator | 2026-02-20 05:37:33.714771 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:37:33.714782 | 
orchestrator | Friday 20 February 2026 05:36:51 +0000 (0:00:01.177) 0:40:58.486 ******* 2026-02-20 05:37:33.714790 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714798 | orchestrator | 2026-02-20 05:37:33.714804 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:37:33.714811 | orchestrator | Friday 20 February 2026 05:36:52 +0000 (0:00:01.147) 0:40:59.634 ******* 2026-02-20 05:37:33.714818 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714824 | orchestrator | 2026-02-20 05:37:33.714842 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:37:33.714848 | orchestrator | Friday 20 February 2026 05:36:53 +0000 (0:00:01.134) 0:41:00.768 ******* 2026-02-20 05:37:33.714862 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714869 | orchestrator | 2026-02-20 05:37:33.714875 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:37:33.714882 | orchestrator | Friday 20 February 2026 05:36:54 +0000 (0:00:00.791) 0:41:01.560 ******* 2026-02-20 05:37:33.714888 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:37:33.714896 | orchestrator | 2026-02-20 05:37:33.714902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:37:33.714909 | orchestrator | Friday 20 February 2026 05:36:56 +0000 (0:00:02.142) 0:41:03.703 ******* 2026-02-20 05:37:33.714916 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:37:33.714922 | orchestrator | 2026-02-20 05:37:33.714929 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:37:33.714935 | orchestrator | Friday 20 February 2026 05:36:57 +0000 (0:00:00.808) 0:41:04.511 ******* 2026-02-20 05:37:33.714942 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-20 05:37:33.714948 | orchestrator | 2026-02-20 05:37:33.714954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:37:33.714961 | orchestrator | Friday 20 February 2026 05:36:58 +0000 (0:00:01.112) 0:41:05.624 ******* 2026-02-20 05:37:33.714967 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714973 | orchestrator | 2026-02-20 05:37:33.714980 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:37:33.714986 | orchestrator | Friday 20 February 2026 05:36:59 +0000 (0:00:01.132) 0:41:06.757 ******* 2026-02-20 05:37:33.714992 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.714999 | orchestrator | 2026-02-20 05:37:33.715005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:37:33.715011 | orchestrator | Friday 20 February 2026 05:37:00 +0000 (0:00:01.118) 0:41:07.875 ******* 2026-02-20 05:37:33.715018 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715024 | orchestrator | 2026-02-20 05:37:33.715030 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:37:33.715036 | orchestrator | Friday 20 February 2026 05:37:01 +0000 (0:00:01.128) 0:41:09.004 ******* 2026-02-20 05:37:33.715043 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715049 | orchestrator | 2026-02-20 05:37:33.715055 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:37:33.715062 | orchestrator | Friday 20 February 2026 05:37:02 +0000 (0:00:01.127) 0:41:10.131 ******* 2026-02-20 05:37:33.715068 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715075 | orchestrator | 2026-02-20 05:37:33.715081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:37:33.715088 | orchestrator | 
Friday 20 February 2026 05:37:03 +0000 (0:00:01.151) 0:41:11.283 ******* 2026-02-20 05:37:33.715094 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715100 | orchestrator | 2026-02-20 05:37:33.715107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:37:33.715113 | orchestrator | Friday 20 February 2026 05:37:04 +0000 (0:00:01.130) 0:41:12.413 ******* 2026-02-20 05:37:33.715138 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715145 | orchestrator | 2026-02-20 05:37:33.715151 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:37:33.715157 | orchestrator | Friday 20 February 2026 05:37:06 +0000 (0:00:01.145) 0:41:13.559 ******* 2026-02-20 05:37:33.715164 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715170 | orchestrator | 2026-02-20 05:37:33.715176 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:37:33.715216 | orchestrator | Friday 20 February 2026 05:37:07 +0000 (0:00:01.118) 0:41:14.678 ******* 2026-02-20 05:37:33.715228 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:37:33.715238 | orchestrator | 2026-02-20 05:37:33.715250 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:37:33.715262 | orchestrator | Friday 20 February 2026 05:37:07 +0000 (0:00:00.784) 0:41:15.462 ******* 2026-02-20 05:37:33.715274 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-20 05:37:33.715284 | orchestrator | 2026-02-20 05:37:33.715292 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:37:33.715299 | orchestrator | Friday 20 February 2026 05:37:09 +0000 (0:00:01.136) 0:41:16.599 ******* 2026-02-20 05:37:33.715307 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-20 05:37:33.715314 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-20 05:37:33.715322 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-20 05:37:33.715329 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-20 05:37:33.715336 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-20 05:37:33.715343 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-20 05:37:33.715351 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-20 05:37:33.715358 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:37:33.715366 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:37:33.715387 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:37:33.715394 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:37:33.715402 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:37:33.715409 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:37:33.715416 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:37:33.715423 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-20 05:37:33.715431 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-20 05:37:33.715438 | orchestrator | 2026-02-20 05:37:33.715445 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:37:33.715452 | orchestrator | Friday 20 February 2026 05:37:15 +0000 (0:00:06.446) 0:41:23.045 ******* 2026-02-20 05:37:33.715458 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-20 05:37:33.715464 | orchestrator | 2026-02-20 05:37:33.715471 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-20 05:37:33.715477 | orchestrator | Friday 20 February 2026 05:37:16 +0000 (0:00:01.151) 0:41:24.197 ******* 2026-02-20 05:37:33.715483 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:37:33.715492 | orchestrator | 2026-02-20 05:37:33.715502 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-20 05:37:33.715512 | orchestrator | Friday 20 February 2026 05:37:18 +0000 (0:00:01.482) 0:41:25.680 ******* 2026-02-20 05:37:33.715522 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:37:33.715540 | orchestrator | 2026-02-20 05:37:33.715550 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:37:33.715560 | orchestrator | Friday 20 February 2026 05:37:19 +0000 (0:00:01.603) 0:41:27.284 ******* 2026-02-20 05:37:33.715569 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715578 | orchestrator | 2026-02-20 05:37:33.715586 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:37:33.715596 | orchestrator | Friday 20 February 2026 05:37:20 +0000 (0:00:00.783) 0:41:28.068 ******* 2026-02-20 05:37:33.715605 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715616 | orchestrator | 2026-02-20 05:37:33.715625 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:37:33.715634 | orchestrator | Friday 20 February 2026 05:37:21 +0000 (0:00:00.787) 0:41:28.855 ******* 2026-02-20 05:37:33.715643 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715652 | orchestrator | 2026-02-20 05:37:33.715662 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-20 05:37:33.715672 | orchestrator | Friday 20 February 2026 05:37:22 +0000 (0:00:00.781) 0:41:29.636 ******* 2026-02-20 05:37:33.715681 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715690 | orchestrator | 2026-02-20 05:37:33.715700 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:37:33.715708 | orchestrator | Friday 20 February 2026 05:37:22 +0000 (0:00:00.783) 0:41:30.420 ******* 2026-02-20 05:37:33.715716 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715725 | orchestrator | 2026-02-20 05:37:33.715734 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:37:33.715743 | orchestrator | Friday 20 February 2026 05:37:23 +0000 (0:00:00.760) 0:41:31.181 ******* 2026-02-20 05:37:33.715751 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715761 | orchestrator | 2026-02-20 05:37:33.715771 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:37:33.715781 | orchestrator | Friday 20 February 2026 05:37:24 +0000 (0:00:00.774) 0:41:31.955 ******* 2026-02-20 05:37:33.715789 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715799 | orchestrator | 2026-02-20 05:37:33.715808 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:37:33.715816 | orchestrator | Friday 20 February 2026 05:37:25 +0000 (0:00:00.771) 0:41:32.726 ******* 2026-02-20 05:37:33.715825 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715860 | orchestrator | 2026-02-20 05:37:33.715877 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:37:33.715887 | orchestrator | Friday 20 
February 2026 05:37:26 +0000 (0:00:00.778) 0:41:33.505 ******* 2026-02-20 05:37:33.715897 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715907 | orchestrator | 2026-02-20 05:37:33.715916 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:37:33.715925 | orchestrator | Friday 20 February 2026 05:37:26 +0000 (0:00:00.765) 0:41:34.270 ******* 2026-02-20 05:37:33.715935 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:37:33.715944 | orchestrator | 2026-02-20 05:37:33.715954 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:37:33.715963 | orchestrator | Friday 20 February 2026 05:37:27 +0000 (0:00:00.761) 0:41:35.031 ******* 2026-02-20 05:37:33.715972 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:37:33.715982 | orchestrator | 2026-02-20 05:37:33.715991 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:37:33.716000 | orchestrator | Friday 20 February 2026 05:37:28 +0000 (0:00:00.845) 0:41:35.877 ******* 2026-02-20 05:37:33.716011 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:37:33.716020 | orchestrator | 2026-02-20 05:37:33.716030 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:37:33.716039 | orchestrator | Friday 20 February 2026 05:37:32 +0000 (0:00:04.485) 0:41:40.363 ******* 2026-02-20 05:37:33.716069 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:38:16.006810 | orchestrator | 2026-02-20 05:38:16.006931 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:38:16.006950 | orchestrator | Friday 20 February 2026 05:37:33 +0000 (0:00:00.824) 0:41:41.187 ******* 2026-02-20 05:38:16.007002 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-20 05:38:16.007018 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-20 05:38:16.007031 | orchestrator | 2026-02-20 05:38:16.007042 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:38:16.007053 | orchestrator | Friday 20 February 2026 05:37:41 +0000 (0:00:08.059) 0:41:49.247 ******* 2026-02-20 05:38:16.007064 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007076 | orchestrator | 2026-02-20 05:38:16.007088 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:38:16.007100 | orchestrator | Friday 20 February 2026 05:37:42 +0000 (0:00:00.768) 0:41:50.016 ******* 2026-02-20 05:38:16.007111 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007122 | orchestrator | 2026-02-20 05:38:16.007134 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:38:16.007146 | orchestrator | Friday 20 February 2026 05:37:43 +0000 (0:00:00.806) 0:41:50.823 ******* 2026-02-20 05:38:16.007157 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007168 | orchestrator | 2026-02-20 05:38:16.007179 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-20 05:38:16.007190 | orchestrator | Friday 20 February 2026 05:37:44 +0000 (0:00:00.787) 0:41:51.610 ******* 2026-02-20 05:38:16.007201 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007212 | orchestrator | 2026-02-20 05:38:16.007223 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:38:16.007234 | orchestrator | Friday 20 February 2026 05:37:44 +0000 (0:00:00.762) 0:41:52.372 ******* 2026-02-20 05:38:16.007245 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007256 | orchestrator | 2026-02-20 05:38:16.007267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:38:16.007278 | orchestrator | Friday 20 February 2026 05:37:45 +0000 (0:00:00.807) 0:41:53.180 ******* 2026-02-20 05:38:16.007289 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.007302 | orchestrator | 2026-02-20 05:38:16.007312 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:38:16.007323 | orchestrator | Friday 20 February 2026 05:37:46 +0000 (0:00:00.887) 0:41:54.068 ******* 2026-02-20 05:38:16.007335 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:38:16.007348 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:38:16.007361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:38:16.007374 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007387 | orchestrator | 2026-02-20 05:38:16.007400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:38:16.007413 | orchestrator | Friday 20 February 2026 05:37:47 +0000 (0:00:01.084) 0:41:55.153 ******* 2026-02-20 05:38:16.007426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:38:16.007465 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:38:16.007478 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:38:16.007490 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007503 | orchestrator | 2026-02-20 05:38:16.007516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:38:16.007543 | orchestrator | Friday 20 February 2026 05:37:48 +0000 (0:00:01.041) 0:41:56.195 ******* 2026-02-20 05:38:16.007555 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:38:16.007565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:38:16.007576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:38:16.007587 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.007598 | orchestrator | 2026-02-20 05:38:16.007609 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:38:16.007620 | orchestrator | Friday 20 February 2026 05:37:49 +0000 (0:00:01.059) 0:41:57.255 ******* 2026-02-20 05:38:16.007631 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.007642 | orchestrator | 2026-02-20 05:38:16.007653 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:38:16.007663 | orchestrator | Friday 20 February 2026 05:37:50 +0000 (0:00:00.805) 0:41:58.061 ******* 2026-02-20 05:38:16.007674 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-20 05:38:16.007685 | orchestrator | 2026-02-20 05:38:16.007696 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:38:16.007707 | orchestrator | Friday 20 February 2026 05:37:51 +0000 (0:00:01.004) 0:41:59.065 ******* 2026-02-20 05:38:16.007720 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.007740 | orchestrator | 
2026-02-20 05:38:16.007765 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-20 05:38:16.007792 | orchestrator | Friday 20 February 2026 05:37:52 +0000 (0:00:01.361) 0:42:00.426 ******* 2026-02-20 05:38:16.007808 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.007825 | orchestrator | 2026-02-20 05:38:16.007864 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:38:16.007882 | orchestrator | Friday 20 February 2026 05:37:53 +0000 (0:00:00.821) 0:42:01.248 ******* 2026-02-20 05:38:16.007900 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:38:16.007918 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:38:16.007938 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:38:16.007985 | orchestrator | 2026-02-20 05:38:16.008004 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-20 05:38:16.008024 | orchestrator | Friday 20 February 2026 05:37:55 +0000 (0:00:01.317) 0:42:02.565 ******* 2026-02-20 05:38:16.008036 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-20 05:38:16.008047 | orchestrator | 2026-02-20 05:38:16.008058 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-20 05:38:16.008068 | orchestrator | Friday 20 February 2026 05:37:56 +0000 (0:00:01.087) 0:42:03.654 ******* 2026-02-20 05:38:16.008079 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008095 | orchestrator | 2026-02-20 05:38:16.008112 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-20 05:38:16.008128 | orchestrator | Friday 20 February 2026 05:37:57 +0000 (0:00:01.089) 
0:42:04.743 ******* 2026-02-20 05:38:16.008146 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008166 | orchestrator | 2026-02-20 05:38:16.008185 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-20 05:38:16.008197 | orchestrator | Friday 20 February 2026 05:37:58 +0000 (0:00:01.121) 0:42:05.865 ******* 2026-02-20 05:38:16.008208 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.008219 | orchestrator | 2026-02-20 05:38:16.008242 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-20 05:38:16.008253 | orchestrator | Friday 20 February 2026 05:37:59 +0000 (0:00:01.571) 0:42:07.437 ******* 2026-02-20 05:38:16.008264 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.008275 | orchestrator | 2026-02-20 05:38:16.008286 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-20 05:38:16.008297 | orchestrator | Friday 20 February 2026 05:38:01 +0000 (0:00:01.174) 0:42:08.611 ******* 2026-02-20 05:38:16.008308 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 05:38:16.008319 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 05:38:16.008330 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 05:38:16.008341 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 05:38:16.008352 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 05:38:16.008363 | orchestrator | 2026-02-20 05:38:16.008374 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-20 05:38:16.008384 | orchestrator | Friday 20 February 2026 05:38:04 +0000 (0:00:03.545) 0:42:12.157 ******* 2026-02-20 
05:38:16.008395 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008406 | orchestrator | 2026-02-20 05:38:16.008417 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-20 05:38:16.008428 | orchestrator | Friday 20 February 2026 05:38:05 +0000 (0:00:00.760) 0:42:12.917 ******* 2026-02-20 05:38:16.008439 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-20 05:38:16.008450 | orchestrator | 2026-02-20 05:38:16.008460 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-20 05:38:16.008471 | orchestrator | Friday 20 February 2026 05:38:06 +0000 (0:00:01.094) 0:42:14.011 ******* 2026-02-20 05:38:16.008482 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 05:38:16.008493 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-20 05:38:16.008504 | orchestrator | 2026-02-20 05:38:16.008515 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-20 05:38:16.008526 | orchestrator | Friday 20 February 2026 05:38:08 +0000 (0:00:01.832) 0:42:15.843 ******* 2026-02-20 05:38:16.008544 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:38:16.008556 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 05:38:16.008567 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:38:16.008577 | orchestrator | 2026-02-20 05:38:16.008588 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:38:16.008599 | orchestrator | Friday 20 February 2026 05:38:11 +0000 (0:00:03.545) 0:42:19.389 ******* 2026-02-20 05:38:16.008610 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-20 05:38:16.008621 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 
05:38:16.008632 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:38:16.008643 | orchestrator | 2026-02-20 05:38:16.008654 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-20 05:38:16.008664 | orchestrator | Friday 20 February 2026 05:38:13 +0000 (0:00:01.660) 0:42:21.050 ******* 2026-02-20 05:38:16.008675 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008686 | orchestrator | 2026-02-20 05:38:16.008697 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-20 05:38:16.008708 | orchestrator | Friday 20 February 2026 05:38:14 +0000 (0:00:00.876) 0:42:21.927 ******* 2026-02-20 05:38:16.008719 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008730 | orchestrator | 2026-02-20 05:38:16.008740 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-20 05:38:16.008751 | orchestrator | Friday 20 February 2026 05:38:15 +0000 (0:00:00.755) 0:42:22.683 ******* 2026-02-20 05:38:16.008770 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:38:16.008781 | orchestrator | 2026-02-20 05:38:16.008800 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-20 05:39:22.869381 | orchestrator | Friday 20 February 2026 05:38:15 +0000 (0:00:00.795) 0:42:23.478 ******* 2026-02-20 05:39:22.869490 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-20 05:39:22.869503 | orchestrator | 2026-02-20 05:39:22.869513 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-20 05:39:22.869522 | orchestrator | Friday 20 February 2026 05:38:17 +0000 (0:00:01.099) 0:42:24.577 ******* 2026-02-20 05:39:22.869532 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:39:22.869541 | orchestrator | 2026-02-20 05:39:22.869550 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-20 05:39:22.869560 | orchestrator | Friday 20 February 2026 05:38:18 +0000 (0:00:01.502) 0:42:26.080 ******* 2026-02-20 05:39:22.869569 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:39:22.869589 | orchestrator | 2026-02-20 05:39:22.869598 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-20 05:39:22.869607 | orchestrator | Friday 20 February 2026 05:38:22 +0000 (0:00:03.535) 0:42:29.616 ******* 2026-02-20 05:39:22.869616 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-20 05:39:22.869625 | orchestrator | 2026-02-20 05:39:22.869688 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-20 05:39:22.869698 | orchestrator | Friday 20 February 2026 05:38:23 +0000 (0:00:01.108) 0:42:30.725 ******* 2026-02-20 05:39:22.869707 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:39:22.869715 | orchestrator | 2026-02-20 05:39:22.869724 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-20 05:39:22.869733 | orchestrator | Friday 20 February 2026 05:38:25 +0000 (0:00:01.936) 0:42:32.662 ******* 2026-02-20 05:39:22.869742 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:39:22.869751 | orchestrator | 2026-02-20 05:39:22.869759 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-20 05:39:22.869768 | orchestrator | Friday 20 February 2026 05:38:27 +0000 (0:00:01.912) 0:42:34.574 ******* 2026-02-20 05:39:22.869777 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:39:22.869786 | orchestrator | 2026-02-20 05:39:22.869794 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-20 05:39:22.869803 | orchestrator | Friday 20 February 2026 05:38:29 +0000 (0:00:02.219) 0:42:36.793 ******* 2026-02-20 
05:39:22.869811 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.869821 | orchestrator | 2026-02-20 05:39:22.869830 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-20 05:39:22.869838 | orchestrator | Friday 20 February 2026 05:38:30 +0000 (0:00:01.166) 0:42:37.960 ******* 2026-02-20 05:39:22.869847 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.869856 | orchestrator | 2026-02-20 05:39:22.869865 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-20 05:39:22.869873 | orchestrator | Friday 20 February 2026 05:38:31 +0000 (0:00:01.161) 0:42:39.122 ******* 2026-02-20 05:39:22.869882 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-20 05:39:22.869891 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-20 05:39:22.869899 | orchestrator | 2026-02-20 05:39:22.869908 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-20 05:39:22.869917 | orchestrator | Friday 20 February 2026 05:38:33 +0000 (0:00:01.798) 0:42:40.921 ******* 2026-02-20 05:39:22.869928 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-20 05:39:22.869938 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-20 05:39:22.869948 | orchestrator | 2026-02-20 05:39:22.869958 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-20 05:39:22.869968 | orchestrator | Friday 20 February 2026 05:38:36 +0000 (0:00:02.929) 0:42:43.850 ******* 2026-02-20 05:39:22.869978 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-20 05:39:22.870011 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-20 05:39:22.870077 | orchestrator | 2026-02-20 05:39:22.870088 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-20 05:39:22.870098 | orchestrator | Friday 20 February 2026 05:38:40 +0000 (0:00:04.510) 
0:42:48.361 ******* 2026-02-20 05:39:22.870108 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.870118 | orchestrator | 2026-02-20 05:39:22.870128 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-20 05:39:22.870138 | orchestrator | Friday 20 February 2026 05:38:41 +0000 (0:00:00.862) 0:42:49.224 ******* 2026-02-20 05:39:22.870170 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.870180 | orchestrator | 2026-02-20 05:39:22.870191 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-20 05:39:22.870201 | orchestrator | Friday 20 February 2026 05:38:42 +0000 (0:00:00.860) 0:42:50.084 ******* 2026-02-20 05:39:22.870210 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.870218 | orchestrator | 2026-02-20 05:39:22.870227 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-20 05:39:22.870235 | orchestrator | Friday 20 February 2026 05:38:43 +0000 (0:00:00.881) 0:42:50.966 ******* 2026-02-20 05:39:22.870244 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.870253 | orchestrator | 2026-02-20 05:39:22.870262 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-20 05:39:22.870270 | orchestrator | Friday 20 February 2026 05:38:44 +0000 (0:00:00.762) 0:42:51.728 ******* 2026-02-20 05:39:22.870279 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:39:22.870288 | orchestrator | 2026-02-20 05:39:22.870296 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-20 05:39:22.870305 | orchestrator | Friday 20 February 2026 05:38:45 +0000 (0:00:00.763) 0:42:52.492 ******* 2026-02-20 05:39:22.870314 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-20 05:39:22.870324 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-20 05:39:22.870333 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-20 05:39:22.870367 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-20 05:39:22.870377 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-02-20 05:39:22.870386 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:39:22.870394 | orchestrator | 2026-02-20 05:39:22.870403 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-20 05:39:22.870412 | orchestrator | 2026-02-20 05:39:22.870420 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:39:22.870429 | orchestrator | Friday 20 February 2026 05:39:02 +0000 (0:00:17.005) 0:43:09.497 ******* 2026-02-20 05:39:22.870438 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-20 05:39:22.870446 | orchestrator | 2026-02-20 05:39:22.870455 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:39:22.870463 | orchestrator | Friday 20 February 2026 05:39:03 +0000 (0:00:01.261) 0:43:10.759 ******* 2026-02-20 05:39:22.870472 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870481 | orchestrator | 2026-02-20 05:39:22.870489 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:39:22.870498 | orchestrator | Friday 20 February 2026 05:39:04 +0000 (0:00:01.457) 0:43:12.216 ******* 2026-02-20 05:39:22.870506 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870515 | orchestrator | 
2026-02-20 05:39:22.870524 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:39:22.870532 | orchestrator | Friday 20 February 2026 05:39:05 +0000 (0:00:01.136) 0:43:13.353 ******* 2026-02-20 05:39:22.870549 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870558 | orchestrator | 2026-02-20 05:39:22.870566 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:39:22.870575 | orchestrator | Friday 20 February 2026 05:39:07 +0000 (0:00:01.468) 0:43:14.822 ******* 2026-02-20 05:39:22.870583 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870592 | orchestrator | 2026-02-20 05:39:22.870601 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:39:22.870609 | orchestrator | Friday 20 February 2026 05:39:08 +0000 (0:00:01.141) 0:43:15.963 ******* 2026-02-20 05:39:22.870618 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870627 | orchestrator | 2026-02-20 05:39:22.870652 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:39:22.870661 | orchestrator | Friday 20 February 2026 05:39:09 +0000 (0:00:01.145) 0:43:17.109 ******* 2026-02-20 05:39:22.870669 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870678 | orchestrator | 2026-02-20 05:39:22.870687 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:39:22.870697 | orchestrator | Friday 20 February 2026 05:39:10 +0000 (0:00:01.155) 0:43:18.265 ******* 2026-02-20 05:39:22.870712 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:39:22.870726 | orchestrator | 2026-02-20 05:39:22.870741 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:39:22.870756 | orchestrator | Friday 20 February 2026 05:39:11 +0000 (0:00:01.119) 0:43:19.384 
******* 2026-02-20 05:39:22.870771 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870785 | orchestrator | 2026-02-20 05:39:22.870799 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:39:22.870813 | orchestrator | Friday 20 February 2026 05:39:13 +0000 (0:00:01.142) 0:43:20.527 ******* 2026-02-20 05:39:22.870827 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:39:22.870840 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:39:22.870854 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:39:22.870867 | orchestrator | 2026-02-20 05:39:22.870882 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:39:22.870896 | orchestrator | Friday 20 February 2026 05:39:14 +0000 (0:00:01.950) 0:43:22.477 ******* 2026-02-20 05:39:22.870910 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:39:22.870925 | orchestrator | 2026-02-20 05:39:22.870941 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:39:22.870967 | orchestrator | Friday 20 February 2026 05:39:16 +0000 (0:00:01.238) 0:43:23.716 ******* 2026-02-20 05:39:22.870984 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:39:22.871001 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:39:22.871018 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:39:22.871036 | orchestrator | 2026-02-20 05:39:22.871052 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:39:22.871068 | orchestrator | Friday 20 February 2026 05:39:19 +0000 (0:00:03.250) 
0:43:26.967 ******* 2026-02-20 05:39:22.871086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 05:39:22.871105 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 05:39:22.871123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 05:39:22.871141 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:39:22.871159 | orchestrator | 2026-02-20 05:39:22.871176 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:39:22.871195 | orchestrator | Friday 20 February 2026 05:39:21 +0000 (0:00:01.741) 0:43:28.709 ******* 2026-02-20 05:39:22.871215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:39:22.871267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:39:42.073404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:39:42.073607 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:39:42.073642 | orchestrator | 2026-02-20 05:39:42.073663 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:39:42.073684 | orchestrator | Friday 20 February 2026 05:39:22 +0000 (0:00:01.631) 0:43:30.341 ******* 2026-02-20 05:39:42.073706 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:42.073721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:42.073733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:42.073745 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:39:42.073757 | orchestrator | 2026-02-20 05:39:42.073768 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:39:42.073779 | orchestrator | Friday 20 February 2026 05:39:24 +0000 (0:00:01.210) 0:43:31.551 ******* 2026-02-20 05:39:42.073792 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:39:17.061455', 'end': '2026-02-20 05:39:17.126246', 'delta': '0:00:00.064791', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:39:42.073824 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:39:17.627276', 'end': '2026-02-20 05:39:17.669359', 'delta': '0:00:00.042083', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:39:42.073883 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:39:18.264642', 'end': '2026-02-20 05:39:18.322221', 'delta': '0:00:00.057579', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:39:42.073896 
| orchestrator |
2026-02-20 05:39:42.073908 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-20 05:39:42.073919 | orchestrator | Friday 20 February 2026 05:39:25 +0000 (0:00:01.168) 0:43:32.720 *******
2026-02-20 05:39:42.073930 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.073945 | orchestrator |
2026-02-20 05:39:42.073958 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-20 05:39:42.073971 | orchestrator | Friday 20 February 2026 05:39:26 +0000 (0:00:01.227) 0:43:33.948 *******
2026-02-20 05:39:42.073984 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.073996 | orchestrator |
2026-02-20 05:39:42.074009 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-20 05:39:42.074082 | orchestrator | Friday 20 February 2026 05:39:27 +0000 (0:00:01.240) 0:43:35.188 *******
2026-02-20 05:39:42.074094 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.074105 | orchestrator |
2026-02-20 05:39:42.074116 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-20 05:39:42.074127 | orchestrator | Friday 20 February 2026 05:39:28 +0000 (0:00:01.159) 0:43:36.347 *******
2026-02-20 05:39:42.074147 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:39:42.074159 | orchestrator |
2026-02-20 05:39:42.074170 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:39:42.074181 | orchestrator | Friday 20 February 2026 05:39:31 +0000 (0:00:02.436) 0:43:38.783 *******
2026-02-20 05:39:42.074207 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.074219 | orchestrator |
2026-02-20 05:39:42.074240 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-20 05:39:42.074251 | orchestrator | Friday 20 February 2026 05:39:32 +0000 (0:00:01.136) 0:43:39.920 *******
2026-02-20 05:39:42.074262 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074273 | orchestrator |
2026-02-20 05:39:42.074284 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-20 05:39:42.074295 | orchestrator | Friday 20 February 2026 05:39:33 +0000 (0:00:01.096) 0:43:41.017 *******
2026-02-20 05:39:42.074306 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074317 | orchestrator |
2026-02-20 05:39:42.074328 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:39:42.074339 | orchestrator | Friday 20 February 2026 05:39:34 +0000 (0:00:01.185) 0:43:42.203 *******
2026-02-20 05:39:42.074349 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074360 | orchestrator |
2026-02-20 05:39:42.074371 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-20 05:39:42.074382 | orchestrator | Friday 20 February 2026 05:39:35 +0000 (0:00:01.102) 0:43:43.305 *******
2026-02-20 05:39:42.074393 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074404 | orchestrator |
2026-02-20 05:39:42.074415 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-20 05:39:42.074426 | orchestrator | Friday 20 February 2026 05:39:36 +0000 (0:00:01.082) 0:43:44.388 *******
2026-02-20 05:39:42.074447 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.074458 | orchestrator |
2026-02-20 05:39:42.074469 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-20 05:39:42.074479 | orchestrator | Friday 20 February 2026 05:39:37 +0000 (0:00:00.974) 0:43:45.362 *******
2026-02-20 05:39:42.074490 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074501 | orchestrator |
2026-02-20 05:39:42.074512 | orchestrator | TASK
[ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-20 05:39:42.074523 | orchestrator | Friday 20 February 2026 05:39:38 +0000 (0:00:00.911) 0:43:46.273 *******
2026-02-20 05:39:42.074534 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.074604 | orchestrator |
2026-02-20 05:39:42.074618 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-20 05:39:42.074630 | orchestrator | Friday 20 February 2026 05:39:39 +0000 (0:00:00.962) 0:43:47.236 *******
2026-02-20 05:39:42.074641 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:42.074652 | orchestrator |
2026-02-20 05:39:42.074669 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-20 05:39:42.074681 | orchestrator | Friday 20 February 2026 05:39:40 +0000 (0:00:00.954) 0:43:48.191 *******
2026-02-20 05:39:42.074692 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:42.074703 | orchestrator |
2026-02-20 05:39:42.074714 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-20 05:39:42.074724 | orchestrator | Friday 20 February 2026 05:39:41 +0000 (0:00:01.069) 0:43:49.260 *******
2026-02-20 05:39:42.074736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:39:42.074757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2',
'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}})  2026-02-20 05:39:42.079236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:39:42.079299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}})  2026-02-20 05:39:42.079330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:39:42.079379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}})  2026-02-20 05:39:42.079441 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}})  2026-02-20 05:39:42.079460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:39:42.079490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-20 05:39:43.184113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:39:43.184210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-20 05:39:43.184251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-20 05:39:43.184265 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:43.184278 | orchestrator |
2026-02-20 05:39:43.184289 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-20 05:39:43.184300 | orchestrator | Friday 20 February 2026 05:39:42 +0000 (0:00:01.209) 0:43:50.470 *******
2026-02-20 05:39:43.184311 | orchestrator | skipping: [testbed-node-5] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184351 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184400 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:43.184455 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765507 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765632 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:39:47.765666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:39:47.765679 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:39:47.765693 | orchestrator |
2026-02-20 05:39:47.765706 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 05:39:47.765718 | orchestrator | Friday 20 February 2026 05:39:44 +0000 (0:00:01.149) 0:43:51.619 *******
2026-02-20 05:39:47.765730 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:47.765743 | orchestrator |
2026-02-20 05:39:47.765754 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:39:47.765765 | orchestrator | Friday 20 February 2026 05:39:45 +0000 (0:00:01.269) 0:43:52.889 *******
2026-02-20 05:39:47.765776 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:47.765787 | orchestrator |
2026-02-20 05:39:47.765805 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:39:47.765818 | orchestrator | Friday 20 February 2026 05:39:46 +0000 (0:00:01.080) 0:43:53.970 *******
2026-02-20 05:39:47.765831 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:39:47.765844 | orchestrator |
2026-02-20 05:39:47.765879 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:39:47.765902 | orchestrator | Friday 20 February 2026 05:39:47 +0000 (0:00:01.272) 0:43:55.242 *******
2026-02-20 05:40:29.077532 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.077657 | orchestrator |
2026-02-20 05:40:29.077675 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:40:29.077689 | orchestrator | Friday 20 February 2026 05:39:48 +0000 (0:00:01.086) 0:43:56.328 *******
2026-02-20 05:40:29.077701 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.077712 | orchestrator |
2026-02-20 05:40:29.077724 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:40:29.077735 | orchestrator | Friday 20 February 2026 05:39:50 +0000 (0:00:01.182) 0:43:57.511 *******
2026-02-20 05:40:29.077747 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.077758 | orchestrator |
2026-02-20 05:40:29.077769 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:40:29.077780 | orchestrator | Friday 20 February 2026 05:39:51 +0000 (0:00:01.131) 0:43:58.643 *******
2026-02-20 05:40:29.077792 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-20 05:40:29.077803 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-20 05:40:29.077814 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-20 05:40:29.077825 | orchestrator |
2026-02-20 05:40:29.077836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:40:29.077847 | orchestrator | Friday 20 February 2026 05:39:52 +0000 (0:00:01.842) 0:44:00.485 *******
2026-02-20 05:40:29.077858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-20 05:40:29.077869 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-20 05:40:29.077881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-20 05:40:29.077892 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.077903 | orchestrator |
2026-02-20 05:40:29.077914 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:40:29.077925 | orchestrator | Friday 20 February 2026 05:39:54 +0000 (0:00:01.132) 0:44:01.618 *******
2026-02-20 05:40:29.077936 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-20 05:40:29.077952 | orchestrator |
2026-02-20 05:40:29.077973 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:40:29.077993 | orchestrator | Friday 20 February 2026 05:39:55 +0000 (0:00:01.125) 0:44:02.744 *******
2026-02-20 05:40:29.078012 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.078109 | orchestrator |
2026-02-20 05:40:29.078130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:40:29.078150 | orchestrator | Friday 20 February 2026 05:39:56 +0000 (0:00:01.110) 0:44:03.854 *******
2026-02-20 05:40:29.078170 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.078189 | orchestrator |
2026-02-20 05:40:29.078243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:40:29.078257 | orchestrator | Friday 20 February 2026 05:39:57 +0000 (0:00:01.107) 0:44:04.961 *******
2026-02-20 05:40:29.078270 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:40:29.078284 | orchestrator |
2026-02-20 05:40:29.078298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:40:29.078310 | orchestrator | Friday 20 February 2026 05:39:58 +0000 (0:00:01.126) 0:44:06.088 *******
2026-02-20 05:40:29.078323 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:40:29.078336 | orchestrator |
2026-02-20 05:40:29.078389 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:40:29.078426 | orchestrator | Friday 20 February 2026 05:39:59 +0000 (0:00:01.207) 0:44:07.295 *******
2026-02-20 05:40:29.078438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 05:40:29.078448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 05:40:29.078459 | orchestrator | skipping: [testbed-node-5]
=> (item=testbed-node-5)  2026-02-20 05:40:29.078470 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.078481 | orchestrator | 2026-02-20 05:40:29.078499 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:40:29.078519 | orchestrator | Friday 20 February 2026 05:40:01 +0000 (0:00:01.416) 0:44:08.712 ******* 2026-02-20 05:40:29.078537 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:40:29.078555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:40:29.078572 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:40:29.078590 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.078608 | orchestrator | 2026-02-20 05:40:29.078626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:40:29.078646 | orchestrator | Friday 20 February 2026 05:40:02 +0000 (0:00:01.385) 0:44:10.097 ******* 2026-02-20 05:40:29.078665 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:40:29.078685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:40:29.078704 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:40:29.078723 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.078742 | orchestrator | 2026-02-20 05:40:29.078754 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:40:29.078765 | orchestrator | Friday 20 February 2026 05:40:04 +0000 (0:00:01.391) 0:44:11.489 ******* 2026-02-20 05:40:29.078776 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.078787 | orchestrator | 2026-02-20 05:40:29.078798 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:40:29.078809 | orchestrator | Friday 20 February 2026 05:40:05 +0000 
(0:00:01.128) 0:44:12.618 ******* 2026-02-20 05:40:29.078820 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-20 05:40:29.078831 | orchestrator | 2026-02-20 05:40:29.078842 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:40:29.078853 | orchestrator | Friday 20 February 2026 05:40:06 +0000 (0:00:01.677) 0:44:14.295 ******* 2026-02-20 05:40:29.078888 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:40:29.078908 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:40:29.078926 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:40:29.078944 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:40:29.078960 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:40:29.078978 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-20 05:40:29.078995 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:40:29.079013 | orchestrator | 2026-02-20 05:40:29.079031 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:40:29.079048 | orchestrator | Friday 20 February 2026 05:40:08 +0000 (0:00:02.172) 0:44:16.468 ******* 2026-02-20 05:40:29.079065 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:40:29.079084 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:40:29.079101 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:40:29.079120 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-20 05:40:29.079157 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:40:29.079175 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-20 05:40:29.079193 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:40:29.079211 | orchestrator | 2026-02-20 05:40:29.079230 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-20 05:40:29.079250 | orchestrator | Friday 20 February 2026 05:40:11 +0000 (0:00:02.237) 0:44:18.706 ******* 2026-02-20 05:40:29.079269 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079287 | orchestrator | 2026-02-20 05:40:29.079298 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-20 05:40:29.079309 | orchestrator | Friday 20 February 2026 05:40:12 +0000 (0:00:01.112) 0:44:19.818 ******* 2026-02-20 05:40:29.079320 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079331 | orchestrator | 2026-02-20 05:40:29.079384 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-20 05:40:29.079400 | orchestrator | Friday 20 February 2026 05:40:13 +0000 (0:00:00.776) 0:44:20.595 ******* 2026-02-20 05:40:29.079411 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079422 | orchestrator | 2026-02-20 05:40:29.079434 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-20 05:40:29.079445 | orchestrator | Friday 20 February 2026 05:40:13 +0000 (0:00:00.870) 0:44:21.466 ******* 2026-02-20 05:40:29.079456 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-20 05:40:29.079467 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-20 05:40:29.079478 | orchestrator | 2026-02-20 05:40:29.079489 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-20 05:40:29.079500 | orchestrator | Friday 20 February 2026 05:40:17 +0000 (0:00:03.797) 0:44:25.264 ******* 2026-02-20 05:40:29.079522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-20 05:40:29.079534 | orchestrator | 2026-02-20 05:40:29.079545 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:40:29.079556 | orchestrator | Friday 20 February 2026 05:40:18 +0000 (0:00:01.102) 0:44:26.367 ******* 2026-02-20 05:40:29.079567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-20 05:40:29.079578 | orchestrator | 2026-02-20 05:40:29.079589 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:40:29.079600 | orchestrator | Friday 20 February 2026 05:40:19 +0000 (0:00:01.098) 0:44:27.465 ******* 2026-02-20 05:40:29.079611 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.079622 | orchestrator | 2026-02-20 05:40:29.079633 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-20 05:40:29.079644 | orchestrator | Friday 20 February 2026 05:40:21 +0000 (0:00:01.207) 0:44:28.673 ******* 2026-02-20 05:40:29.079669 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079681 | orchestrator | 2026-02-20 05:40:29.079692 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-20 05:40:29.079703 | orchestrator | Friday 20 February 2026 05:40:22 +0000 (0:00:01.469) 0:44:30.143 ******* 2026-02-20 05:40:29.079714 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079725 | orchestrator | 2026-02-20 05:40:29.079736 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 05:40:29.079747 | orchestrator | 
Friday 20 February 2026 05:40:24 +0000 (0:00:01.502) 0:44:31.646 ******* 2026-02-20 05:40:29.079758 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:40:29.079769 | orchestrator | 2026-02-20 05:40:29.079780 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 05:40:29.079791 | orchestrator | Friday 20 February 2026 05:40:25 +0000 (0:00:01.526) 0:44:33.173 ******* 2026-02-20 05:40:29.079802 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.079813 | orchestrator | 2026-02-20 05:40:29.079824 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 05:40:29.079844 | orchestrator | Friday 20 February 2026 05:40:26 +0000 (0:00:01.124) 0:44:34.297 ******* 2026-02-20 05:40:29.079855 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.079867 | orchestrator | 2026-02-20 05:40:29.079878 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 05:40:29.079889 | orchestrator | Friday 20 February 2026 05:40:27 +0000 (0:00:01.155) 0:44:35.453 ******* 2026-02-20 05:40:29.079904 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:40:29.079923 | orchestrator | 2026-02-20 05:40:29.079959 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 05:41:08.231698 | orchestrator | Friday 20 February 2026 05:40:29 +0000 (0:00:01.094) 0:44:36.548 ******* 2026-02-20 05:41:08.231856 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.231877 | orchestrator | 2026-02-20 05:41:08.231890 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 05:41:08.231945 | orchestrator | Friday 20 February 2026 05:40:30 +0000 (0:00:01.595) 0:44:38.143 ******* 2026-02-20 05:41:08.231959 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.231970 | orchestrator | 2026-02-20 05:41:08.231982 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 05:41:08.231993 | orchestrator | Friday 20 February 2026 05:40:32 +0000 (0:00:01.543) 0:44:39.687 ******* 2026-02-20 05:41:08.232005 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232017 | orchestrator | 2026-02-20 05:41:08.232028 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 05:41:08.232040 | orchestrator | Friday 20 February 2026 05:40:32 +0000 (0:00:00.739) 0:44:40.427 ******* 2026-02-20 05:41:08.232051 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232062 | orchestrator | 2026-02-20 05:41:08.232073 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 05:41:08.232084 | orchestrator | Friday 20 February 2026 05:40:33 +0000 (0:00:00.740) 0:44:41.167 ******* 2026-02-20 05:41:08.232095 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.232107 | orchestrator | 2026-02-20 05:41:08.232118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 05:41:08.232129 | orchestrator | Friday 20 February 2026 05:40:34 +0000 (0:00:00.764) 0:44:41.932 ******* 2026-02-20 05:41:08.232140 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.232151 | orchestrator | 2026-02-20 05:41:08.232162 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 05:41:08.232173 | orchestrator | Friday 20 February 2026 05:40:35 +0000 (0:00:00.771) 0:44:42.704 ******* 2026-02-20 05:41:08.232184 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.232236 | orchestrator | 2026-02-20 05:41:08.232251 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 05:41:08.232264 | orchestrator | Friday 20 February 2026 05:40:35 +0000 (0:00:00.697) 0:44:43.401 ******* 2026-02-20 05:41:08.232277 | 
orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232290 | orchestrator | 2026-02-20 05:41:08.232303 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 05:41:08.232316 | orchestrator | Friday 20 February 2026 05:40:36 +0000 (0:00:00.762) 0:44:44.164 ******* 2026-02-20 05:41:08.232329 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232341 | orchestrator | 2026-02-20 05:41:08.232352 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 05:41:08.232363 | orchestrator | Friday 20 February 2026 05:40:37 +0000 (0:00:00.752) 0:44:44.917 ******* 2026-02-20 05:41:08.232374 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232386 | orchestrator | 2026-02-20 05:41:08.232397 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 05:41:08.232408 | orchestrator | Friday 20 February 2026 05:40:38 +0000 (0:00:00.770) 0:44:45.687 ******* 2026-02-20 05:41:08.232419 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.232431 | orchestrator | 2026-02-20 05:41:08.232443 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 05:41:08.232486 | orchestrator | Friday 20 February 2026 05:40:38 +0000 (0:00:00.749) 0:44:46.437 ******* 2026-02-20 05:41:08.232503 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.232520 | orchestrator | 2026-02-20 05:41:08.232554 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 05:41:08.232572 | orchestrator | Friday 20 February 2026 05:40:39 +0000 (0:00:00.787) 0:44:47.225 ******* 2026-02-20 05:41:08.232591 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232608 | orchestrator | 2026-02-20 05:41:08.232626 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 
05:41:08.232646 | orchestrator | Friday 20 February 2026 05:40:40 +0000 (0:00:00.759) 0:44:47.984 ******* 2026-02-20 05:41:08.232666 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232684 | orchestrator | 2026-02-20 05:41:08.232702 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 05:41:08.232720 | orchestrator | Friday 20 February 2026 05:40:41 +0000 (0:00:00.743) 0:44:48.727 ******* 2026-02-20 05:41:08.232737 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232757 | orchestrator | 2026-02-20 05:41:08.232776 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 05:41:08.232795 | orchestrator | Friday 20 February 2026 05:40:41 +0000 (0:00:00.743) 0:44:49.471 ******* 2026-02-20 05:41:08.232815 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232828 | orchestrator | 2026-02-20 05:41:08.232839 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 05:41:08.232850 | orchestrator | Friday 20 February 2026 05:40:42 +0000 (0:00:00.767) 0:44:50.238 ******* 2026-02-20 05:41:08.232861 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232872 | orchestrator | 2026-02-20 05:41:08.232883 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 05:41:08.232894 | orchestrator | Friday 20 February 2026 05:40:43 +0000 (0:00:00.753) 0:44:50.991 ******* 2026-02-20 05:41:08.232905 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232916 | orchestrator | 2026-02-20 05:41:08.232926 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 05:41:08.232937 | orchestrator | Friday 20 February 2026 05:40:44 +0000 (0:00:00.772) 0:44:51.764 ******* 2026-02-20 05:41:08.232948 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.232959 | 
orchestrator | 2026-02-20 05:41:08.232970 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 05:41:08.232981 | orchestrator | Friday 20 February 2026 05:40:45 +0000 (0:00:00.767) 0:44:52.531 ******* 2026-02-20 05:41:08.232991 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233002 | orchestrator | 2026-02-20 05:41:08.233013 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 05:41:08.233024 | orchestrator | Friday 20 February 2026 05:40:45 +0000 (0:00:00.780) 0:44:53.312 ******* 2026-02-20 05:41:08.233057 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233069 | orchestrator | 2026-02-20 05:41:08.233080 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 05:41:08.233091 | orchestrator | Friday 20 February 2026 05:40:46 +0000 (0:00:00.762) 0:44:54.074 ******* 2026-02-20 05:41:08.233101 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233112 | orchestrator | 2026-02-20 05:41:08.233123 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 05:41:08.233134 | orchestrator | Friday 20 February 2026 05:40:47 +0000 (0:00:00.760) 0:44:54.835 ******* 2026-02-20 05:41:08.233144 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233156 | orchestrator | 2026-02-20 05:41:08.233166 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-20 05:41:08.233177 | orchestrator | Friday 20 February 2026 05:40:48 +0000 (0:00:00.752) 0:44:55.588 ******* 2026-02-20 05:41:08.233213 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233225 | orchestrator | 2026-02-20 05:41:08.233236 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:41:08.233261 | orchestrator | Friday 20 
February 2026 05:40:48 +0000 (0:00:00.789) 0:44:56.377 ******* 2026-02-20 05:41:08.233272 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.233283 | orchestrator | 2026-02-20 05:41:08.233294 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:41:08.233305 | orchestrator | Friday 20 February 2026 05:40:50 +0000 (0:00:01.616) 0:44:57.994 ******* 2026-02-20 05:41:08.233316 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.233327 | orchestrator | 2026-02-20 05:41:08.233337 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:41:08.233348 | orchestrator | Friday 20 February 2026 05:40:52 +0000 (0:00:01.870) 0:44:59.865 ******* 2026-02-20 05:41:08.233359 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-20 05:41:08.233371 | orchestrator | 2026-02-20 05:41:08.233383 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:41:08.233393 | orchestrator | Friday 20 February 2026 05:40:53 +0000 (0:00:01.147) 0:45:01.012 ******* 2026-02-20 05:41:08.233404 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233415 | orchestrator | 2026-02-20 05:41:08.233426 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:41:08.233437 | orchestrator | Friday 20 February 2026 05:40:54 +0000 (0:00:01.117) 0:45:02.130 ******* 2026-02-20 05:41:08.233447 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233458 | orchestrator | 2026-02-20 05:41:08.233469 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 05:41:08.233479 | orchestrator | Friday 20 February 2026 05:40:55 +0000 (0:00:01.120) 0:45:03.250 ******* 2026-02-20 05:41:08.233490 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:41:08.233501 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:41:08.233511 | orchestrator | 2026-02-20 05:41:08.233522 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:41:08.233533 | orchestrator | Friday 20 February 2026 05:40:57 +0000 (0:00:01.800) 0:45:05.051 ******* 2026-02-20 05:41:08.233544 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.233555 | orchestrator | 2026-02-20 05:41:08.233566 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:41:08.233585 | orchestrator | Friday 20 February 2026 05:40:59 +0000 (0:00:01.460) 0:45:06.512 ******* 2026-02-20 05:41:08.233596 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233607 | orchestrator | 2026-02-20 05:41:08.233618 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:41:08.233629 | orchestrator | Friday 20 February 2026 05:41:00 +0000 (0:00:01.154) 0:45:07.666 ******* 2026-02-20 05:41:08.233640 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233651 | orchestrator | 2026-02-20 05:41:08.233661 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:41:08.233672 | orchestrator | Friday 20 February 2026 05:41:00 +0000 (0:00:00.802) 0:45:08.469 ******* 2026-02-20 05:41:08.233683 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233694 | orchestrator | 2026-02-20 05:41:08.233705 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:41:08.233715 | orchestrator | Friday 20 February 2026 05:41:01 +0000 (0:00:00.814) 0:45:09.284 ******* 2026-02-20 05:41:08.233726 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-20 05:41:08.233737 | orchestrator | 2026-02-20 05:41:08.233748 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:41:08.233758 | orchestrator | Friday 20 February 2026 05:41:02 +0000 (0:00:01.165) 0:45:10.449 ******* 2026-02-20 05:41:08.233769 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:08.233780 | orchestrator | 2026-02-20 05:41:08.233791 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:41:08.233810 | orchestrator | Friday 20 February 2026 05:41:04 +0000 (0:00:01.820) 0:45:12.270 ******* 2026-02-20 05:41:08.233821 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:41:08.233831 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:41:08.233842 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:41:08.233853 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233864 | orchestrator | 2026-02-20 05:41:08.233874 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:41:08.233885 | orchestrator | Friday 20 February 2026 05:41:05 +0000 (0:00:01.137) 0:45:13.408 ******* 2026-02-20 05:41:08.233896 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:08.233907 | orchestrator | 2026-02-20 05:41:08.233918 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:41:08.233937 | orchestrator | Friday 20 February 2026 05:41:07 +0000 (0:00:01.136) 0:45:14.544 ******* 2026-02-20 05:41:08.233967 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118561 | orchestrator | 2026-02-20 05:41:51.118661 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:41:51.118673 | 
orchestrator | Friday 20 February 2026 05:41:08 +0000 (0:00:01.159) 0:45:15.703 ******* 2026-02-20 05:41:51.118681 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118689 | orchestrator | 2026-02-20 05:41:51.118696 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:41:51.118703 | orchestrator | Friday 20 February 2026 05:41:09 +0000 (0:00:01.138) 0:45:16.842 ******* 2026-02-20 05:41:51.118710 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118717 | orchestrator | 2026-02-20 05:41:51.118724 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:41:51.118730 | orchestrator | Friday 20 February 2026 05:41:10 +0000 (0:00:01.189) 0:45:18.032 ******* 2026-02-20 05:41:51.118737 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118744 | orchestrator | 2026-02-20 05:41:51.118751 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:41:51.118758 | orchestrator | Friday 20 February 2026 05:41:11 +0000 (0:00:00.793) 0:45:18.825 ******* 2026-02-20 05:41:51.118765 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:51.118772 | orchestrator | 2026-02-20 05:41:51.118779 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:41:51.118786 | orchestrator | Friday 20 February 2026 05:41:13 +0000 (0:00:02.142) 0:45:20.968 ******* 2026-02-20 05:41:51.118793 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:51.118800 | orchestrator | 2026-02-20 05:41:51.118807 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:41:51.118814 | orchestrator | Friday 20 February 2026 05:41:14 +0000 (0:00:00.776) 0:45:21.744 ******* 2026-02-20 05:41:51.118821 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-20 05:41:51.118828 | orchestrator | 2026-02-20 05:41:51.118834 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:41:51.118841 | orchestrator | Friday 20 February 2026 05:41:15 +0000 (0:00:01.319) 0:45:23.063 ******* 2026-02-20 05:41:51.118847 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118854 | orchestrator | 2026-02-20 05:41:51.118861 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:41:51.118868 | orchestrator | Friday 20 February 2026 05:41:16 +0000 (0:00:01.141) 0:45:24.205 ******* 2026-02-20 05:41:51.118874 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118881 | orchestrator | 2026-02-20 05:41:51.118888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:41:51.118894 | orchestrator | Friday 20 February 2026 05:41:17 +0000 (0:00:01.123) 0:45:25.329 ******* 2026-02-20 05:41:51.118901 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118927 | orchestrator | 2026-02-20 05:41:51.118935 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:41:51.118941 | orchestrator | Friday 20 February 2026 05:41:18 +0000 (0:00:01.125) 0:45:26.454 ******* 2026-02-20 05:41:51.118948 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118955 | orchestrator | 2026-02-20 05:41:51.118962 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:41:51.118968 | orchestrator | Friday 20 February 2026 05:41:20 +0000 (0:00:01.134) 0:45:27.588 ******* 2026-02-20 05:41:51.118975 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.118982 | orchestrator | 2026-02-20 05:41:51.119000 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:41:51.119007 | orchestrator | 
Friday 20 February 2026 05:41:21 +0000 (0:00:01.137) 0:45:28.726 ******* 2026-02-20 05:41:51.119013 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119020 | orchestrator | 2026-02-20 05:41:51.119112 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:41:51.119123 | orchestrator | Friday 20 February 2026 05:41:22 +0000 (0:00:01.134) 0:45:29.860 ******* 2026-02-20 05:41:51.119132 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119140 | orchestrator | 2026-02-20 05:41:51.119148 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:41:51.119156 | orchestrator | Friday 20 February 2026 05:41:23 +0000 (0:00:01.135) 0:45:30.996 ******* 2026-02-20 05:41:51.119164 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119172 | orchestrator | 2026-02-20 05:41:51.119179 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:41:51.119187 | orchestrator | Friday 20 February 2026 05:41:24 +0000 (0:00:01.140) 0:45:32.137 ******* 2026-02-20 05:41:51.119195 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:51.119203 | orchestrator | 2026-02-20 05:41:51.119210 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:41:51.119219 | orchestrator | Friday 20 February 2026 05:41:25 +0000 (0:00:00.781) 0:45:32.918 ******* 2026-02-20 05:41:51.119227 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-20 05:41:51.119236 | orchestrator | 2026-02-20 05:41:51.119244 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:41:51.119252 | orchestrator | Friday 20 February 2026 05:41:26 +0000 (0:00:01.111) 0:45:34.030 ******* 2026-02-20 05:41:51.119259 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-02-20 05:41:51.119266 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-20 05:41:51.119273 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-20 05:41:51.119280 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-20 05:41:51.119286 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-20 05:41:51.119293 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-20 05:41:51.119300 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-20 05:41:51.119307 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:41:51.119314 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:41:51.119335 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:41:51.119342 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:41:51.119349 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:41:51.119355 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:41:51.119362 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:41:51.119369 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-20 05:41:51.119375 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-20 05:41:51.119382 | orchestrator | 2026-02-20 05:41:51.119389 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:41:51.119403 | orchestrator | Friday 20 February 2026 05:41:32 +0000 (0:00:06.421) 0:45:40.452 ******* 2026-02-20 05:41:51.119410 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-20 05:41:51.119417 | orchestrator | 2026-02-20 05:41:51.119424 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-20 05:41:51.119430 | orchestrator | Friday 20 February 2026 05:41:34 +0000 (0:00:01.146) 0:45:41.598 ******* 2026-02-20 05:41:51.119437 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 05:41:51.119445 | orchestrator | 2026-02-20 05:41:51.119452 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-20 05:41:51.119458 | orchestrator | Friday 20 February 2026 05:41:35 +0000 (0:00:01.484) 0:45:43.082 ******* 2026-02-20 05:41:51.119465 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 05:41:51.119472 | orchestrator | 2026-02-20 05:41:51.119479 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:41:51.119485 | orchestrator | Friday 20 February 2026 05:41:37 +0000 (0:00:01.649) 0:45:44.732 ******* 2026-02-20 05:41:51.119492 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119499 | orchestrator | 2026-02-20 05:41:51.119506 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:41:51.119512 | orchestrator | Friday 20 February 2026 05:41:38 +0000 (0:00:00.795) 0:45:45.527 ******* 2026-02-20 05:41:51.119519 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119526 | orchestrator | 2026-02-20 05:41:51.119533 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:41:51.119539 | orchestrator | Friday 20 February 2026 05:41:38 +0000 (0:00:00.744) 0:45:46.272 ******* 2026-02-20 05:41:51.119546 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119553 | orchestrator | 2026-02-20 05:41:51.119560 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-20 05:41:51.119566 | orchestrator | Friday 20 February 2026 05:41:39 +0000 (0:00:00.754) 0:45:47.027 ******* 2026-02-20 05:41:51.119573 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119580 | orchestrator | 2026-02-20 05:41:51.119587 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:41:51.119593 | orchestrator | Friday 20 February 2026 05:41:40 +0000 (0:00:00.805) 0:45:47.832 ******* 2026-02-20 05:41:51.119600 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119611 | orchestrator | 2026-02-20 05:41:51.119618 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:41:51.119625 | orchestrator | Friday 20 February 2026 05:41:41 +0000 (0:00:00.756) 0:45:48.589 ******* 2026-02-20 05:41:51.119631 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119638 | orchestrator | 2026-02-20 05:41:51.119645 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:41:51.119652 | orchestrator | Friday 20 February 2026 05:41:41 +0000 (0:00:00.759) 0:45:49.348 ******* 2026-02-20 05:41:51.119658 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119665 | orchestrator | 2026-02-20 05:41:51.119672 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:41:51.119679 | orchestrator | Friday 20 February 2026 05:41:42 +0000 (0:00:00.750) 0:45:50.099 ******* 2026-02-20 05:41:51.119685 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119692 | orchestrator | 2026-02-20 05:41:51.119699 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:41:51.119705 | orchestrator | Friday 20 
February 2026 05:41:43 +0000 (0:00:00.790) 0:45:50.890 ******* 2026-02-20 05:41:51.119712 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119723 | orchestrator | 2026-02-20 05:41:51.119730 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:41:51.119737 | orchestrator | Friday 20 February 2026 05:41:44 +0000 (0:00:00.771) 0:45:51.661 ******* 2026-02-20 05:41:51.119743 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:41:51.119750 | orchestrator | 2026-02-20 05:41:51.119757 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:41:51.119763 | orchestrator | Friday 20 February 2026 05:41:44 +0000 (0:00:00.749) 0:45:52.410 ******* 2026-02-20 05:41:51.119770 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:41:51.119777 | orchestrator | 2026-02-20 05:41:51.119784 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:41:51.119790 | orchestrator | Friday 20 February 2026 05:41:45 +0000 (0:00:00.849) 0:45:53.260 ******* 2026-02-20 05:41:51.119797 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:41:51.119804 | orchestrator | 2026-02-20 05:41:51.119811 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:41:51.119817 | orchestrator | Friday 20 February 2026 05:41:50 +0000 (0:00:04.494) 0:45:57.754 ******* 2026-02-20 05:41:51.119829 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 05:42:32.988548 | orchestrator | 2026-02-20 05:42:32.988680 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:42:32.988698 | orchestrator | Friday 20 February 2026 05:41:51 +0000 (0:00:00.837) 0:45:58.592 ******* 2026-02-20 05:42:32.988711 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-20 05:42:32.988724 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-20 05:42:32.988736 | orchestrator | 2026-02-20 05:42:32.988747 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:42:32.988757 | orchestrator | Friday 20 February 2026 05:41:59 +0000 (0:00:08.033) 0:46:06.626 ******* 2026-02-20 05:42:32.988767 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.988778 | orchestrator | 2026-02-20 05:42:32.988788 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:42:32.988798 | orchestrator | Friday 20 February 2026 05:41:59 +0000 (0:00:00.766) 0:46:07.392 ******* 2026-02-20 05:42:32.988808 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.988818 | orchestrator | 2026-02-20 05:42:32.988828 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:42:32.988839 | orchestrator | Friday 20 February 2026 05:42:00 +0000 (0:00:00.817) 0:46:08.210 ******* 2026-02-20 05:42:32.988849 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.988859 | orchestrator | 2026-02-20 05:42:32.988869 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-20 05:42:32.988879 | orchestrator | Friday 20 February 2026 05:42:01 +0000 (0:00:00.777) 0:46:08.987 ******* 2026-02-20 05:42:32.988987 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.988999 | orchestrator | 2026-02-20 05:42:32.989009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:42:32.989020 | orchestrator | Friday 20 February 2026 05:42:02 +0000 (0:00:00.814) 0:46:09.802 ******* 2026-02-20 05:42:32.989030 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989040 | orchestrator | 2026-02-20 05:42:32.989050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:42:32.989085 | orchestrator | Friday 20 February 2026 05:42:03 +0000 (0:00:00.786) 0:46:10.589 ******* 2026-02-20 05:42:32.989098 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989110 | orchestrator | 2026-02-20 05:42:32.989122 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:42:32.989147 | orchestrator | Friday 20 February 2026 05:42:03 +0000 (0:00:00.877) 0:46:11.466 ******* 2026-02-20 05:42:32.989159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:42:32.989171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:42:32.989182 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:42:32.989194 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989205 | orchestrator | 2026-02-20 05:42:32.989217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:42:32.989228 | orchestrator | Friday 20 February 2026 05:42:05 +0000 (0:00:01.416) 0:46:12.882 ******* 2026-02-20 05:42:32.989239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:42:32.989251 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:42:32.989263 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:42:32.989274 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989286 | orchestrator | 2026-02-20 05:42:32.989297 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:42:32.989309 | orchestrator | Friday 20 February 2026 05:42:06 +0000 (0:00:01.371) 0:46:14.253 ******* 2026-02-20 05:42:32.989320 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:42:32.989331 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:42:32.989342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:42:32.989353 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989365 | orchestrator | 2026-02-20 05:42:32.989376 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:42:32.989389 | orchestrator | Friday 20 February 2026 05:42:07 +0000 (0:00:01.059) 0:46:15.313 ******* 2026-02-20 05:42:32.989400 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989411 | orchestrator | 2026-02-20 05:42:32.989422 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:42:32.989434 | orchestrator | Friday 20 February 2026 05:42:08 +0000 (0:00:00.816) 0:46:16.130 ******* 2026-02-20 05:42:32.989445 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-20 05:42:32.989456 | orchestrator | 2026-02-20 05:42:32.989465 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:42:32.989475 | orchestrator | Friday 20 February 2026 05:42:09 +0000 (0:00:00.976) 0:46:17.106 ******* 2026-02-20 05:42:32.989485 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989495 | orchestrator | 
2026-02-20 05:42:32.989505 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-20 05:42:32.989515 | orchestrator | Friday 20 February 2026 05:42:11 +0000 (0:00:01.433) 0:46:18.540 ******* 2026-02-20 05:42:32.989524 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989534 | orchestrator | 2026-02-20 05:42:32.989560 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-20 05:42:32.989571 | orchestrator | Friday 20 February 2026 05:42:11 +0000 (0:00:00.758) 0:46:19.298 ******* 2026-02-20 05:42:32.989581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:42:32.989591 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:42:32.989601 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:42:32.989611 | orchestrator | 2026-02-20 05:42:32.989621 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-20 05:42:32.989630 | orchestrator | Friday 20 February 2026 05:42:13 +0000 (0:00:01.586) 0:46:20.885 ******* 2026-02-20 05:42:32.989656 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-20 05:42:32.989680 | orchestrator | 2026-02-20 05:42:32.989698 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-20 05:42:32.989714 | orchestrator | Friday 20 February 2026 05:42:14 +0000 (0:00:01.106) 0:46:21.992 ******* 2026-02-20 05:42:32.989729 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989745 | orchestrator | 2026-02-20 05:42:32.989760 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-20 05:42:32.989777 | orchestrator | Friday 20 February 2026 05:42:15 +0000 (0:00:01.160) 
0:46:23.152 ******* 2026-02-20 05:42:32.989794 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.989809 | orchestrator | 2026-02-20 05:42:32.989824 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-20 05:42:32.989839 | orchestrator | Friday 20 February 2026 05:42:16 +0000 (0:00:01.095) 0:46:24.248 ******* 2026-02-20 05:42:32.989855 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989869 | orchestrator | 2026-02-20 05:42:32.989910 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-20 05:42:32.989928 | orchestrator | Friday 20 February 2026 05:42:18 +0000 (0:00:01.452) 0:46:25.700 ******* 2026-02-20 05:42:32.989944 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.989959 | orchestrator | 2026-02-20 05:42:32.989975 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-20 05:42:32.989991 | orchestrator | Friday 20 February 2026 05:42:19 +0000 (0:00:01.148) 0:46:26.849 ******* 2026-02-20 05:42:32.990008 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-20 05:42:32.990100 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-20 05:42:32.990111 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-20 05:42:32.990122 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-20 05:42:32.990139 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-20 05:42:32.990162 | orchestrator | 2026-02-20 05:42:32.990181 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-20 05:42:32.990196 | orchestrator | Friday 20 February 2026 05:42:21 +0000 (0:00:02.552) 0:46:29.402 ******* 2026-02-20 
05:42:32.990222 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.990238 | orchestrator | 2026-02-20 05:42:32.990254 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-20 05:42:32.990270 | orchestrator | Friday 20 February 2026 05:42:22 +0000 (0:00:00.747) 0:46:30.149 ******* 2026-02-20 05:42:32.990285 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-20 05:42:32.990302 | orchestrator | 2026-02-20 05:42:32.990320 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-20 05:42:32.990336 | orchestrator | Friday 20 February 2026 05:42:23 +0000 (0:00:01.116) 0:46:31.266 ******* 2026-02-20 05:42:32.990352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-20 05:42:32.990363 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-20 05:42:32.990373 | orchestrator | 2026-02-20 05:42:32.990383 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-20 05:42:32.990392 | orchestrator | Friday 20 February 2026 05:42:25 +0000 (0:00:01.837) 0:46:33.104 ******* 2026-02-20 05:42:32.990402 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:42:32.990412 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 05:42:32.990421 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:42:32.990431 | orchestrator | 2026-02-20 05:42:32.990441 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:42:32.990451 | orchestrator | Friday 20 February 2026 05:42:28 +0000 (0:00:03.276) 0:46:36.381 ******* 2026-02-20 05:42:32.990471 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-20 05:42:32.990481 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 
05:42:32.990491 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:42:32.990501 | orchestrator | 2026-02-20 05:42:32.990511 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-20 05:42:32.990521 | orchestrator | Friday 20 February 2026 05:42:30 +0000 (0:00:01.655) 0:46:38.037 ******* 2026-02-20 05:42:32.990530 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.990540 | orchestrator | 2026-02-20 05:42:32.990550 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-20 05:42:32.990562 | orchestrator | Friday 20 February 2026 05:42:31 +0000 (0:00:00.877) 0:46:38.914 ******* 2026-02-20 05:42:32.990579 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.990595 | orchestrator | 2026-02-20 05:42:32.990611 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-20 05:42:32.990628 | orchestrator | Friday 20 February 2026 05:42:32 +0000 (0:00:00.780) 0:46:39.695 ******* 2026-02-20 05:42:32.990644 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:42:32.990662 | orchestrator | 2026-02-20 05:42:32.990693 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-20 05:44:54.822271 | orchestrator | Friday 20 February 2026 05:42:32 +0000 (0:00:00.765) 0:46:40.460 ******* 2026-02-20 05:44:54.822416 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-20 05:44:54.822436 | orchestrator | 2026-02-20 05:44:54.822449 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-20 05:44:54.822510 | orchestrator | Friday 20 February 2026 05:42:34 +0000 (0:00:01.329) 0:46:41.789 ******* 2026-02-20 05:44:54.822522 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.822539 | orchestrator | 2026-02-20 05:44:54.822551 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-20 05:44:54.822562 | orchestrator | Friday 20 February 2026 05:42:35 +0000 (0:00:01.474) 0:46:43.264 ******* 2026-02-20 05:44:54.822573 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.822584 | orchestrator | 2026-02-20 05:44:54.822596 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-20 05:44:54.822607 | orchestrator | Friday 20 February 2026 05:42:39 +0000 (0:00:03.411) 0:46:46.676 ******* 2026-02-20 05:44:54.822618 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-20 05:44:54.822629 | orchestrator | 2026-02-20 05:44:54.822640 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-20 05:44:54.822651 | orchestrator | Friday 20 February 2026 05:42:40 +0000 (0:00:01.195) 0:46:47.871 ******* 2026-02-20 05:44:54.822662 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.822674 | orchestrator | 2026-02-20 05:44:54.822685 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-20 05:44:54.822696 | orchestrator | Friday 20 February 2026 05:42:42 +0000 (0:00:02.014) 0:46:49.886 ******* 2026-02-20 05:44:54.822707 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.822718 | orchestrator | 2026-02-20 05:44:54.822729 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-20 05:44:54.822739 | orchestrator | Friday 20 February 2026 05:42:44 +0000 (0:00:01.918) 0:46:51.805 ******* 2026-02-20 05:44:54.822750 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.822761 | orchestrator | 2026-02-20 05:44:54.822772 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-20 05:44:54.822783 | orchestrator | Friday 20 February 2026 05:42:46 +0000 (0:00:02.285) 0:46:54.090 ******* 2026-02-20 
05:44:54.822794 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.822807 | orchestrator | 2026-02-20 05:44:54.822818 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-20 05:44:54.822829 | orchestrator | Friday 20 February 2026 05:42:47 +0000 (0:00:01.112) 0:46:55.203 ******* 2026-02-20 05:44:54.822865 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.822877 | orchestrator | 2026-02-20 05:44:54.822888 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-20 05:44:54.822899 | orchestrator | Friday 20 February 2026 05:42:48 +0000 (0:00:01.124) 0:46:56.328 ******* 2026-02-20 05:44:54.822910 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-20 05:44:54.822921 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-20 05:44:54.822932 | orchestrator | 2026-02-20 05:44:54.822943 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-20 05:44:54.822968 | orchestrator | Friday 20 February 2026 05:42:50 +0000 (0:00:01.870) 0:46:58.198 ******* 2026-02-20 05:44:54.822979 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-20 05:44:54.822990 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-20 05:44:54.823001 | orchestrator | 2026-02-20 05:44:54.823012 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-20 05:44:54.823023 | orchestrator | Friday 20 February 2026 05:42:53 +0000 (0:00:02.966) 0:47:01.165 ******* 2026-02-20 05:44:54.823034 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-20 05:44:54.823045 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-20 05:44:54.823055 | orchestrator | 2026-02-20 05:44:54.823066 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-20 05:44:54.823077 | orchestrator | Friday 20 February 2026 05:42:57 +0000 (0:00:04.253) 
0:47:05.419 ******* 2026-02-20 05:44:54.823088 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.823099 | orchestrator | 2026-02-20 05:44:54.823110 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-20 05:44:54.823121 | orchestrator | Friday 20 February 2026 05:42:58 +0000 (0:00:00.880) 0:47:06.299 ******* 2026-02-20 05:44:54.823132 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-20 05:44:54.823143 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:44:54.823154 | orchestrator | 2026-02-20 05:44:54.823165 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-20 05:44:54.823175 | orchestrator | Friday 20 February 2026 05:43:12 +0000 (0:00:13.518) 0:47:19.817 ******* 2026-02-20 05:44:54.823186 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.823197 | orchestrator | 2026-02-20 05:44:54.823208 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-20 05:44:54.823219 | orchestrator | Friday 20 February 2026 05:43:13 +0000 (0:00:00.898) 0:47:20.716 ******* 2026-02-20 05:44:54.823230 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.823240 | orchestrator | 2026-02-20 05:44:54.823251 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-20 05:44:54.823262 | orchestrator | Friday 20 February 2026 05:43:13 +0000 (0:00:00.752) 0:47:21.469 ******* 2026-02-20 05:44:54.823273 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:44:54.823284 | orchestrator | 2026-02-20 05:44:54.823295 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-20 05:44:54.823306 | orchestrator | Friday 20 February 2026 05:43:14 +0000 (0:00:00.751) 0:47:22.221 ******* 2026-02-20 05:44:54.823316 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-02-20 05:44:54.823328 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:44:54.823338 | orchestrator | 2026-02-20 05:44:54.823367 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-20 05:44:54.823379 | orchestrator | 2026-02-20 05:44:54.823390 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:44:54.823401 | orchestrator | Friday 20 February 2026 05:43:20 +0000 (0:00:05.527) 0:47:27.749 ******* 2026-02-20 05:44:54.823412 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:44:54.823422 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:44:54.823433 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.823452 | orchestrator | 2026-02-20 05:44:54.823482 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:44:54.823494 | orchestrator | Friday 20 February 2026 05:43:22 +0000 (0:00:01.783) 0:47:29.533 ******* 2026-02-20 05:44:54.823504 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:44:54.823515 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:44:54.823526 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:44:54.823537 | orchestrator | 2026-02-20 05:44:54.823548 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-20 05:44:54.823558 | orchestrator | Friday 20 February 2026 05:43:23 +0000 (0:00:01.636) 0:47:31.170 ******* 2026-02-20 05:44:54.823569 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-20 05:44:54.823580 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-20 05:44:54.823591 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-20 05:44:54.823602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-20 05:44:54.823614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-20 05:44:54.823625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-20 05:44:54.823636 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-20 05:44:54.823647 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-20 05:44:54.823658 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-20 05:44:54.823669 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-20 05:44:54.823680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-20 05:44:54.823691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-20 05:44:54.823702 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-20 05:44:54.823718 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-20 05:44:54.823729 | orchestrator |
2026-02-20 05:44:54.823740 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-02-20 05:44:54.823751 | orchestrator | Friday 20 February 2026 05:44:38 +0000 (0:01:15.115) 0:48:46.285 *******
2026-02-20 05:44:54.823762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-20 05:44:54.823773 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-20 05:44:54.823783 | orchestrator |
2026-02-20 05:44:54.823794 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-02-20 05:44:54.823805 | orchestrator | Friday 20 February 2026 05:44:44 +0000 (0:00:05.256) 0:48:51.542 *******
2026-02-20 05:44:54.823816 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:44:54.823826 | orchestrator |
2026-02-20 05:44:54.823837 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-02-20 05:44:54.823848 | orchestrator |
2026-02-20 05:44:54.823858 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 05:44:54.823869 | orchestrator | Friday 20 February 2026 05:44:47 +0000 (0:00:03.260) 0:48:54.802 *******
2026-02-20 05:44:54.823880 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-20 05:44:54.823891 | orchestrator |
2026-02-20 05:44:54.823901 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 05:44:54.823912 | orchestrator | Friday 20 February 2026 05:44:48 +0000 (0:00:01.148) 0:48:55.950 *******
2026-02-20 05:44:54.823929 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:44:54.823940 | orchestrator |
2026-02-20 05:44:54.823951 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 05:44:54.823962 | orchestrator | Friday 20 February 2026 05:44:49 +0000 (0:00:01.506) 0:48:57.457 *******
2026-02-20 05:44:54.823973 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:44:54.823984 | orchestrator |
2026-02-20 05:44:54.823994 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 05:44:54.824005 | orchestrator | Friday 20 February 2026 05:44:51 +0000 (0:00:01.142) 0:48:58.600 *******
2026-02-20 05:44:54.824016 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:44:54.824027 | orchestrator |
2026-02-20 05:44:54.824037 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 05:44:54.824048 | orchestrator | Friday 20 February 2026 05:44:52 +0000 (0:00:01.447) 0:49:00.047 *******
2026-02-20 05:44:54.824059 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:44:54.824070 | orchestrator |
2026-02-20 05:44:54.824080 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 05:44:54.824091 | orchestrator | Friday 20 February 2026 05:44:53 +0000 (0:00:01.137) 0:49:01.184 *******
2026-02-20 05:44:54.824109 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:45:19.867328 | orchestrator |
2026-02-20 05:45:19.867534 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 05:45:19.867565 | orchestrator | Friday 20 February 2026 05:44:54 +0000 (0:00:01.111) 0:49:02.296 *******
2026-02-20 05:45:19.867584 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:45:19.867604 | orchestrator |
2026-02-20 05:45:19.867624 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 05:45:19.867642 | orchestrator | Friday 20 February 2026 05:44:55 +0000 (0:00:01.126) 0:49:03.423 *******
2026-02-20 05:45:19.867659 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:45:19.867678 | orchestrator |
2026-02-20 05:45:19.867696 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 05:45:19.867715 | orchestrator | Friday 20 February 2026 05:44:57 +0000 (0:00:01.124) 0:49:04.547 *******
2026-02-20 05:45:19.867735 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:45:19.867755 | orchestrator |
2026-02-20 05:45:19.867776 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-20 05:45:19.867789 | orchestrator | Friday 20 February 2026 05:44:58 +0000 (0:00:01.102) 0:49:05.650 *******
2026-02-20 05:45:19.867801 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:45:19.867812 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:45:19.867823 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:45:19.867835 | orchestrator |
2026-02-20 05:45:19.867846 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-20 05:45:19.867859 | orchestrator | Friday 20 February 2026 05:44:59 +0000 (0:00:01.647) 0:49:07.297 *******
2026-02-20 05:45:19.867871 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:45:19.867884 | orchestrator |
2026-02-20 05:45:19.867896 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-20 05:45:19.867909 | orchestrator | Friday 20 February 2026 05:45:01 +0000 (0:00:01.278) 0:49:08.576 *******
2026-02-20 05:45:19.867922 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:45:19.867935 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:45:19.867947 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:45:19.867960 | orchestrator |
2026-02-20 05:45:19.867972 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-20 05:45:19.867985 | orchestrator | Friday 20 February 2026 05:45:04 +0000 (0:00:03.243)
0:49:11.819 ******* 2026-02-20 05:45:19.867998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-20 05:45:19.868034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-20 05:45:19.868048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-20 05:45:19.868061 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:19.868074 | orchestrator | 2026-02-20 05:45:19.868085 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:45:19.868096 | orchestrator | Friday 20 February 2026 05:45:05 +0000 (0:00:01.454) 0:49:13.274 ******* 2026-02-20 05:45:19.868123 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868139 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868150 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868168 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:19.868188 | orchestrator | 2026-02-20 05:45:19.868207 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:45:19.868224 | orchestrator | Friday 20 February 2026 05:45:07 +0000 (0:00:01.992) 0:49:15.267 ******* 2026-02-20 05:45:19.868248 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868309 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:19.868322 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:19.868334 | orchestrator | 2026-02-20 05:45:19.868348 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:45:19.868366 | orchestrator | Friday 20 February 2026 05:45:09 +0000 (0:00:01.236) 0:49:16.503 ******* 2026-02-20 05:45:19.868387 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:45:01.618138', 'end': '2026-02-20 05:45:01.663133', 'delta': '0:00:00.044995', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:45:19.868453 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:45:02.199512', 'end': '2026-02-20 05:45:02.236740', 'delta': '0:00:00.037228', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:45:19.868480 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:45:03.125422', 'end': '2026-02-20 05:45:03.172105', 'delta': '0:00:00.046683', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:45:19.868492 
| orchestrator | 2026-02-20 05:45:19.868504 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:45:19.868515 | orchestrator | Friday 20 February 2026 05:45:10 +0000 (0:00:01.270) 0:49:17.774 ******* 2026-02-20 05:45:19.868526 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:19.868537 | orchestrator | 2026-02-20 05:45:19.868548 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:45:19.868559 | orchestrator | Friday 20 February 2026 05:45:11 +0000 (0:00:01.665) 0:49:19.439 ******* 2026-02-20 05:45:19.868570 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:19.868581 | orchestrator | 2026-02-20 05:45:19.868592 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:45:19.868602 | orchestrator | Friday 20 February 2026 05:45:13 +0000 (0:00:01.254) 0:49:20.694 ******* 2026-02-20 05:45:19.868613 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:19.868624 | orchestrator | 2026-02-20 05:45:19.868635 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:45:19.868646 | orchestrator | Friday 20 February 2026 05:45:14 +0000 (0:00:01.131) 0:49:21.826 ******* 2026-02-20 05:45:19.868657 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:19.868668 | orchestrator | 2026-02-20 05:45:19.868678 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:45:19.868689 | orchestrator | Friday 20 February 2026 05:45:16 +0000 (0:00:02.021) 0:49:23.848 ******* 2026-02-20 05:45:19.868700 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:19.868711 | orchestrator | 2026-02-20 05:45:19.868722 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:45:19.868733 | orchestrator | Friday 20 February 2026 05:45:17 +0000 (0:00:01.132) 
0:49:24.981 ******* 2026-02-20 05:45:19.868744 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:19.868754 | orchestrator | 2026-02-20 05:45:19.868765 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:45:19.868776 | orchestrator | Friday 20 February 2026 05:45:18 +0000 (0:00:01.141) 0:49:26.122 ******* 2026-02-20 05:45:19.868796 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343093 | orchestrator | 2026-02-20 05:45:30.343194 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:45:30.343204 | orchestrator | Friday 20 February 2026 05:45:19 +0000 (0:00:01.218) 0:49:27.341 ******* 2026-02-20 05:45:30.343210 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343234 | orchestrator | 2026-02-20 05:45:30.343239 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:45:30.343244 | orchestrator | Friday 20 February 2026 05:45:21 +0000 (0:00:01.159) 0:49:28.501 ******* 2026-02-20 05:45:30.343249 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343253 | orchestrator | 2026-02-20 05:45:30.343258 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:45:30.343263 | orchestrator | Friday 20 February 2026 05:45:22 +0000 (0:00:01.114) 0:49:29.616 ******* 2026-02-20 05:45:30.343268 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343272 | orchestrator | 2026-02-20 05:45:30.343277 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:45:30.343282 | orchestrator | Friday 20 February 2026 05:45:23 +0000 (0:00:01.137) 0:49:30.753 ******* 2026-02-20 05:45:30.343287 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343292 | orchestrator | 2026-02-20 05:45:30.343297 | orchestrator | TASK [ceph-facts : Set_fact build 
dedicated_devices from resolved symlinks] **** 2026-02-20 05:45:30.343301 | orchestrator | Friday 20 February 2026 05:45:24 +0000 (0:00:01.151) 0:49:31.905 ******* 2026-02-20 05:45:30.343306 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343310 | orchestrator | 2026-02-20 05:45:30.343315 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:45:30.343320 | orchestrator | Friday 20 February 2026 05:45:25 +0000 (0:00:01.113) 0:49:33.019 ******* 2026-02-20 05:45:30.343324 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343329 | orchestrator | 2026-02-20 05:45:30.343334 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:45:30.343339 | orchestrator | Friday 20 February 2026 05:45:26 +0000 (0:00:01.126) 0:49:34.146 ******* 2026-02-20 05:45:30.343344 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343349 | orchestrator | 2026-02-20 05:45:30.343353 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:45:30.343358 | orchestrator | Friday 20 February 2026 05:45:27 +0000 (0:00:01.130) 0:49:35.276 ******* 2026-02-20 05:45:30.343364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:45:30.343464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': 
{}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': 
['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:45:30.343506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:45:30.343519 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:30.343524 | orchestrator | 2026-02-20 05:45:30.343529 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:45:30.343534 | orchestrator | Friday 20 February 
2026 05:45:29 +0000 (0:00:01.256) 0:49:36.533 ******* 2026-02-20 05:45:30.343542 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471269 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471352 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471414 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471424 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471447 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471453 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c1d2133', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14', 
'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c1d2133-543d-47a1-9a8f-77b9e889b460-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471489 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471500 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:45:34.471507 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:45:34.471514 | orchestrator | 2026-02-20 05:45:34.471521 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:45:34.471528 | orchestrator | Friday 20 February 2026 05:45:30 +0000 (0:00:01.285) 0:49:37.819 ******* 2026-02-20 05:45:34.471534 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:34.471546 | orchestrator | 2026-02-20 05:45:34.471555 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:45:34.471564 | orchestrator 
| Friday 20 February 2026 05:45:31 +0000 (0:00:01.505) 0:49:39.324 ******* 2026-02-20 05:45:34.471572 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:34.471582 | orchestrator | 2026-02-20 05:45:34.471591 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:45:34.471600 | orchestrator | Friday 20 February 2026 05:45:32 +0000 (0:00:01.114) 0:49:40.438 ******* 2026-02-20 05:45:34.471610 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:45:34.471619 | orchestrator | 2026-02-20 05:45:34.471629 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:45:34.471645 | orchestrator | Friday 20 February 2026 05:45:34 +0000 (0:00:01.507) 0:49:41.946 ******* 2026-02-20 05:46:29.534125 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:46:29.534291 | orchestrator | 2026-02-20 05:46:29.534312 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:46:29.534326 | orchestrator | Friday 20 February 2026 05:45:35 +0000 (0:00:01.119) 0:49:43.066 ******* 2026-02-20 05:46:29.534338 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:46:29.534349 | orchestrator | 2026-02-20 05:46:29.534361 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:46:29.534372 | orchestrator | Friday 20 February 2026 05:45:36 +0000 (0:00:01.222) 0:49:44.288 ******* 2026-02-20 05:46:29.534383 | orchestrator | skipping: [testbed-node-0] 2026-02-20 05:46:29.534395 | orchestrator | 2026-02-20 05:46:29.534428 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:46:29.534462 | orchestrator | Friday 20 February 2026 05:45:37 +0000 (0:00:01.129) 0:49:45.417 ******* 2026-02-20 05:46:29.534481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-20 05:46:29.534500 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1)
2026-02-20 05:46:29.534519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 05:46:29.534531 | orchestrator |
2026-02-20 05:46:29.534541 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:46:29.534553 | orchestrator | Friday 20 February 2026 05:45:39 +0000 (0:00:01.956) 0:49:47.373 *******
2026-02-20 05:46:29.534564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:46:29.534575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-20 05:46:29.534586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-20 05:46:29.534599 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:46:29.534612 | orchestrator |
2026-02-20 05:46:29.534625 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:46:29.534637 | orchestrator | Friday 20 February 2026 05:45:41 +0000 (0:00:01.218) 0:49:48.592 *******
2026-02-20 05:46:29.534676 | orchestrator | skipping: [testbed-node-0]
2026-02-20 05:46:29.534690 | orchestrator |
2026-02-20 05:46:29.534702 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:46:29.534715 | orchestrator | Friday 20 February 2026 05:45:42 +0000 (0:00:01.112) 0:49:49.705 *******
2026-02-20 05:46:29.534728 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:46:29.534741 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:46:29.534755 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:46:29.534781 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:46:29.534794 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] =>
(item=testbed-node-4)
2026-02-20 05:46:29.534807 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:46:29.534819 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:46:29.534832 | orchestrator |
2026-02-20 05:46:29.534844 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:46:29.534855 | orchestrator | Friday 20 February 2026 05:45:44 +0000 (0:00:02.194) 0:49:51.899 *******
2026-02-20 05:46:29.534866 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-20 05:46:29.534877 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:46:29.534888 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:46:29.534899 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:46:29.534910 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:46:29.534921 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:46:29.534932 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:46:29.534943 | orchestrator |
2026-02-20 05:46:29.534954 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-20 05:46:29.534964 | orchestrator | Friday 20 February 2026 05:45:47 +0000 (0:00:03.029) 0:49:54.929 *******
2026-02-20 05:46:29.534976 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:46:29.534987 | orchestrator |
2026-02-20 05:46:29.534998 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-20 05:46:29.535008 | orchestrator | Friday 20 February 2026 05:45:50 +0000
(0:00:03.291) 0:49:58.221 ******* 2026-02-20 05:46:29.535019 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:46:29.535030 | orchestrator | 2026-02-20 05:46:29.535041 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-02-20 05:46:29.535052 | orchestrator | Friday 20 February 2026 05:45:53 +0000 (0:00:02.945) 0:50:01.166 ******* 2026-02-20 05:46:29.535063 | orchestrator | ok: [testbed-node-0] 2026-02-20 05:46:29.535074 | orchestrator | 2026-02-20 05:46:29.535085 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-02-20 05:46:29.535096 | orchestrator | Friday 20 February 2026 05:45:55 +0000 (0:00:02.157) 0:50:03.323 ******* 2026-02-20 05:46:29.535131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4720', 'value': {'gid': 4720, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/186968199', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 186968199}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 186968199}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-02-20 05:46:29.535155 | orchestrator | 2026-02-20 05:46:29.535167 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-02-20 05:46:29.535178 | orchestrator | Friday 20 February 2026 05:45:56 +0000 (0:00:01.147) 0:50:04.471 ******* 2026-02-20 05:46:29.535189 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-3)
2026-02-20 05:46:29.535200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-20 05:46:29.535211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5)
2026-02-20 05:46:29.535222 | orchestrator |
2026-02-20 05:46:29.535233 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-20 05:46:29.535287 | orchestrator | Friday 20 February 2026 05:45:58 +0000 (0:00:01.505) 0:50:05.977 *******
2026-02-20 05:46:29.535298 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-20 05:46:29.535309 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-02-20 05:46:29.535320 | orchestrator |
2026-02-20 05:46:29.535331 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-20 05:46:29.535342 | orchestrator | Friday 20 February 2026 05:45:59 +0000 (0:00:01.471) 0:50:07.449 *******
2026-02-20 05:46:29.535353 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:46:29.535364 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:46:29.535376 | orchestrator |
2026-02-20 05:46:29.535386 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-20 05:46:29.535397 | orchestrator | Friday 20 February 2026 05:46:11 +0000 (0:00:11.135) 0:50:18.584 *******
2026-02-20 05:46:29.535411 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:46:29.535431 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:46:29.535450 | orchestrator |
2026-02-20 05:46:29.535478 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-20 05:46:29.535497 |
orchestrator | Friday 20 February 2026 05:46:14 +0000 (0:00:03.789) 0:50:22.373 *******
2026-02-20 05:46:29.535515 | orchestrator | ok: [testbed-node-0]
2026-02-20 05:46:29.535527 | orchestrator |
2026-02-20 05:46:29.535537 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-20 05:46:29.535548 | orchestrator | Friday 20 February 2026 05:46:17 +0000 (0:00:02.133) 0:50:24.507 *******
2026-02-20 05:46:29.535559 | orchestrator | changed: [testbed-node-0]
2026-02-20 05:46:29.535570 | orchestrator |
2026-02-20 05:46:29.535581 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-20 05:46:29.535592 | orchestrator |
2026-02-20 05:46:29.535603 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-20 05:46:29.535614 | orchestrator | Friday 20 February 2026 05:46:18 +0000 (0:00:01.524) 0:50:26.031 *******
2026-02-20 05:46:29.535625 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-20 05:46:29.535635 | orchestrator |
2026-02-20 05:46:29.535646 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-20 05:46:29.535657 | orchestrator | Friday 20 February 2026 05:46:19 +0000 (0:00:01.215) 0:50:27.247 *******
2026-02-20 05:46:29.535668 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535679 | orchestrator |
2026-02-20 05:46:29.535690 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-20 05:46:29.535700 | orchestrator | Friday 20 February 2026 05:46:21 +0000 (0:00:01.447) 0:50:28.695 *******
2026-02-20 05:46:29.535711 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535722 | orchestrator |
2026-02-20 05:46:29.535733 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 05:46:29.535775 | orchestrator |
Friday 20 February 2026 05:46:22 +0000 (0:00:01.142) 0:50:29.837 *******
2026-02-20 05:46:29.535786 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535797 | orchestrator |
2026-02-20 05:46:29.535808 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 05:46:29.535819 | orchestrator | Friday 20 February 2026 05:46:23 +0000 (0:00:01.471) 0:50:31.309 *******
2026-02-20 05:46:29.535830 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535841 | orchestrator |
2026-02-20 05:46:29.535852 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-20 05:46:29.535863 | orchestrator | Friday 20 February 2026 05:46:24 +0000 (0:00:01.119) 0:50:32.428 *******
2026-02-20 05:46:29.535874 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535885 | orchestrator |
2026-02-20 05:46:29.535896 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-20 05:46:29.535907 | orchestrator | Friday 20 February 2026 05:46:26 +0000 (0:00:01.157) 0:50:33.586 *******
2026-02-20 05:46:29.535918 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.535929 | orchestrator |
2026-02-20 05:46:29.535940 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-20 05:46:29.535951 | orchestrator | Friday 20 February 2026 05:46:27 +0000 (0:00:01.129) 0:50:34.715 *******
2026-02-20 05:46:29.535962 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:29.535972 | orchestrator |
2026-02-20 05:46:29.535983 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-20 05:46:29.535995 | orchestrator | Friday 20 February 2026 05:46:28 +0000 (0:00:01.141) 0:50:35.857 *******
2026-02-20 05:46:29.536005 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:29.536017 | orchestrator |
2026-02-20 05:46:29.536037 | orchestrator | TASK
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-20 05:46:54.203167 | orchestrator | Friday 20 February 2026 05:46:29 +0000 (0:00:01.144) 0:50:37.002 *******
2026-02-20 05:46:54.203401 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:46:54.203421 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:46:54.203433 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:46:54.203445 | orchestrator |
2026-02-20 05:46:54.203458 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-20 05:46:54.203478 | orchestrator | Friday 20 February 2026 05:46:31 +0000 (0:00:01.994) 0:50:38.997 *******
2026-02-20 05:46:54.203495 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:54.203513 | orchestrator |
2026-02-20 05:46:54.203530 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-20 05:46:54.203549 | orchestrator | Friday 20 February 2026 05:46:32 +0000 (0:00:01.231) 0:50:40.229 *******
2026-02-20 05:46:54.203569 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:46:54.203589 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:46:54.203607 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:46:54.203619 | orchestrator |
2026-02-20 05:46:54.203630 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-20 05:46:54.203641 | orchestrator | Friday 20 February 2026 05:46:36 +0000 (0:00:03.260) 0:50:43.489 *******
2026-02-20 05:46:54.203652 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-20 05:46:54.203664 | orchestrator |
skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 05:46:54.203677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 05:46:54.203689 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:46:54.203702 | orchestrator | 2026-02-20 05:46:54.203715 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:46:54.203728 | orchestrator | Friday 20 February 2026 05:46:37 +0000 (0:00:01.674) 0:50:45.164 ******* 2026-02-20 05:46:54.203768 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203825 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:46:54.203836 | orchestrator | 2026-02-20 05:46:54.203847 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:46:54.203859 | orchestrator | Friday 20 February 2026 05:46:39 +0000 (0:00:01.653) 0:50:46.817 ******* 2026-02-20 05:46:54.203872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:46:54.203909 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:46:54.203920 | orchestrator | 2026-02-20 05:46:54.203931 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:46:54.203942 | orchestrator | Friday 20 February 2026 05:46:40 +0000 (0:00:01.161) 0:50:47.979 ******* 2026-02-20 05:46:54.203976 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:46:33.565249', 'end': '2026-02-20 05:46:33.612917', 'delta': '0:00:00.047668', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:46:54.203991 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:46:34.146382', 'end': '2026-02-20 05:46:34.191065', 'delta': '0:00:00.044683', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:46:54.204017 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:46:34.775823', 'end': '2026-02-20 05:46:34.839823', 'delta': '0:00:00.064000', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:46:54.204030 | orchestrator | 2026-02-20 05:46:54.204041 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:46:54.204052 | 
orchestrator | Friday 20 February 2026 05:46:41 +0000 (0:00:01.182) 0:50:49.162 *******
2026-02-20 05:46:54.204064 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:54.204075 | orchestrator |
2026-02-20 05:46:54.204086 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-20 05:46:54.204097 | orchestrator | Friday 20 February 2026 05:46:42 +0000 (0:00:01.225) 0:50:50.388 *******
2026-02-20 05:46:54.204108 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:54.204119 | orchestrator |
2026-02-20 05:46:54.204130 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-20 05:46:54.204141 | orchestrator | Friday 20 February 2026 05:46:44 +0000 (0:00:01.220) 0:50:51.609 *******
2026-02-20 05:46:54.204152 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:54.204163 | orchestrator |
2026-02-20 05:46:54.204174 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-20 05:46:54.204224 | orchestrator | Friday 20 February 2026 05:46:45 +0000 (0:00:01.127) 0:50:52.736 *******
2026-02-20 05:46:54.204238 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-20 05:46:54.204249 | orchestrator |
2026-02-20 05:46:54.204260 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:46:54.204271 | orchestrator | Friday 20 February 2026 05:46:47 +0000 (0:00:01.969) 0:50:54.705 *******
2026-02-20 05:46:54.204282 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:54.204293 | orchestrator |
2026-02-20 05:46:54.204304 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-20 05:46:54.204315 | orchestrator | Friday 20 February 2026 05:46:48 +0000 (0:00:01.216) 0:50:55.922 *******
2026-02-20 05:46:54.204326 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:54.204337 | orchestrator |
2026-02-20 05:46:54.204348 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-20 05:46:54.204359 | orchestrator | Friday 20 February 2026 05:46:49 +0000 (0:00:01.102) 0:50:57.024 *******
2026-02-20 05:46:54.204370 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:54.204381 | orchestrator |
2026-02-20 05:46:54.204392 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-20 05:46:54.204404 | orchestrator | Friday 20 February 2026 05:46:50 +0000 (0:00:01.249) 0:50:58.274 *******
2026-02-20 05:46:54.204415 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:54.204426 | orchestrator |
2026-02-20 05:46:54.204436 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-20 05:46:54.204447 | orchestrator | Friday 20 February 2026 05:46:51 +0000 (0:00:01.121) 0:50:59.396 *******
2026-02-20 05:46:54.204458 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:54.204469 | orchestrator |
2026-02-20 05:46:54.204480 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-20 05:46:54.204491 | orchestrator | Friday 20 February 2026 05:46:53 +0000 (0:00:01.106) 0:51:00.503 *******
2026-02-20 05:46:54.204518 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:46:58.910078 | orchestrator |
2026-02-20 05:46:58.911079 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-20 05:46:58.911123 | orchestrator | Friday 20 February 2026 05:46:54 +0000 (0:00:01.172) 0:51:01.675 *******
2026-02-20 05:46:58.911134 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:46:58.911144 | orchestrator |
2026-02-20 05:46:58.911154 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-20 05:46:58.911163 | orchestrator | Friday 20 February 2026 05:46:55 +0000
(0:00:01.108) 0:51:02.784 ******* 2026-02-20 05:46:58.911172 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:46:58.911213 | orchestrator | 2026-02-20 05:46:58.911223 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:46:58.911232 | orchestrator | Friday 20 February 2026 05:46:56 +0000 (0:00:01.122) 0:51:03.906 ******* 2026-02-20 05:46:58.911241 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:46:58.911250 | orchestrator | 2026-02-20 05:46:58.911258 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:46:58.911268 | orchestrator | Friday 20 February 2026 05:46:57 +0000 (0:00:01.080) 0:51:04.986 ******* 2026-02-20 05:46:58.911278 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:46:58.911287 | orchestrator | 2026-02-20 05:46:58.911296 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:46:58.911305 | orchestrator | Friday 20 February 2026 05:46:58 +0000 (0:00:01.164) 0:51:06.151 ******* 2026-02-20 05:46:58.911316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}})  2026-02-20 05:46:58.911361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:46:58.911371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}})  2026-02-20 05:46:58.911400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:46:58.911451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:46:58.911485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}})  2026-02-20 05:46:58.911494 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}})  2026-02-20 05:46:58.911517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:47:00.294964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:47:00.295121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:47:00.295143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:47:00.295232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:47:00.295248 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:00.295262 | orchestrator | 2026-02-20 05:47:00.295280 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:47:00.295300 | orchestrator | Friday 20 February 2026 05:47:00 +0000 (0:00:01.398) 0:51:07.549 ******* 2026-02-20 05:47:00.295397 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:00.295423 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:00.295481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:00.295506 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:00.295546 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:00.295584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443853 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:01.443928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:35.014849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:47:35.014971 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.014989 | orchestrator | 2026-02-20 05:47:35.015001 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:47:35.015013 | orchestrator | Friday 20 February 2026 05:47:01 +0000 (0:00:01.370) 0:51:08.920 ******* 2026-02-20 05:47:35.015025 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:47:35.015037 | orchestrator | 2026-02-20 05:47:35.015048 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:47:35.015059 | orchestrator | Friday 20 February 2026 05:47:02 +0000 (0:00:01.484) 0:51:10.405 ******* 2026-02-20 05:47:35.015070 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:47:35.015160 | orchestrator | 2026-02-20 05:47:35.015175 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:47:35.015186 | orchestrator | Friday 20 February 2026 05:47:04 +0000 (0:00:01.123) 0:51:11.528 ******* 2026-02-20 05:47:35.015197 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:47:35.015208 | orchestrator | 2026-02-20 05:47:35.015219 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:47:35.015270 | orchestrator | Friday 20 February 2026 05:47:05 +0000 (0:00:01.467) 0:51:12.996 ******* 2026-02-20 05:47:35.015282 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015294 | orchestrator | 2026-02-20 05:47:35.015305 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:47:35.015316 | orchestrator | Friday 20 February 2026 05:47:06 +0000 (0:00:01.105) 0:51:14.102 ******* 2026-02-20 05:47:35.015327 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
05:47:35.015337 | orchestrator | 2026-02-20 05:47:35.015348 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:47:35.015359 | orchestrator | Friday 20 February 2026 05:47:07 +0000 (0:00:01.220) 0:51:15.323 ******* 2026-02-20 05:47:35.015372 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015385 | orchestrator | 2026-02-20 05:47:35.015398 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:47:35.015410 | orchestrator | Friday 20 February 2026 05:47:08 +0000 (0:00:01.140) 0:51:16.463 ******* 2026-02-20 05:47:35.015424 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-20 05:47:35.015437 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-20 05:47:35.015450 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-20 05:47:35.015463 | orchestrator | 2026-02-20 05:47:35.015475 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:47:35.015486 | orchestrator | Friday 20 February 2026 05:47:11 +0000 (0:00:02.037) 0:51:18.500 ******* 2026-02-20 05:47:35.015497 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 05:47:35.015508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 05:47:35.015519 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 05:47:35.015529 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015540 | orchestrator | 2026-02-20 05:47:35.015551 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:47:35.015562 | orchestrator | Friday 20 February 2026 05:47:12 +0000 (0:00:01.134) 0:51:19.635 ******* 2026-02-20 05:47:35.015573 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-20 05:47:35.015585 | 
orchestrator | 2026-02-20 05:47:35.015597 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:47:35.015609 | orchestrator | Friday 20 February 2026 05:47:13 +0000 (0:00:01.117) 0:51:20.753 ******* 2026-02-20 05:47:35.015620 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015631 | orchestrator | 2026-02-20 05:47:35.015642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:47:35.015653 | orchestrator | Friday 20 February 2026 05:47:14 +0000 (0:00:01.120) 0:51:21.874 ******* 2026-02-20 05:47:35.015663 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015674 | orchestrator | 2026-02-20 05:47:35.015685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:47:35.015696 | orchestrator | Friday 20 February 2026 05:47:15 +0000 (0:00:01.127) 0:51:23.001 ******* 2026-02-20 05:47:35.015707 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015718 | orchestrator | 2026-02-20 05:47:35.015728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:47:35.015739 | orchestrator | Friday 20 February 2026 05:47:16 +0000 (0:00:01.120) 0:51:24.121 ******* 2026-02-20 05:47:35.015750 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:47:35.015770 | orchestrator | 2026-02-20 05:47:35.015781 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:47:35.015792 | orchestrator | Friday 20 February 2026 05:47:17 +0000 (0:00:01.212) 0:51:25.334 ******* 2026-02-20 05:47:35.015803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:47:35.015832 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:47:35.015844 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-20 05:47:35.015855 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015866 | orchestrator | 2026-02-20 05:47:35.015877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:47:35.015888 | orchestrator | Friday 20 February 2026 05:47:19 +0000 (0:00:01.374) 0:51:26.709 ******* 2026-02-20 05:47:35.015899 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:47:35.015910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:47:35.015920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:47:35.015931 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.015942 | orchestrator | 2026-02-20 05:47:35.015963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:47:35.015981 | orchestrator | Friday 20 February 2026 05:47:20 +0000 (0:00:01.391) 0:51:28.101 ******* 2026-02-20 05:47:35.016001 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 05:47:35.016020 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 05:47:35.016039 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 05:47:35.016065 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.016086 | orchestrator | 2026-02-20 05:47:35.016132 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:47:35.016151 | orchestrator | Friday 20 February 2026 05:47:21 +0000 (0:00:01.378) 0:51:29.480 ******* 2026-02-20 05:47:35.016169 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:47:35.016188 | orchestrator | 2026-02-20 05:47:35.016207 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:47:35.016227 | orchestrator | Friday 20 February 2026 05:47:23 +0000 
(0:00:01.139) 0:51:30.619 ******* 2026-02-20 05:47:35.016246 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-20 05:47:35.016265 | orchestrator | 2026-02-20 05:47:35.016283 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:47:35.016300 | orchestrator | Friday 20 February 2026 05:47:24 +0000 (0:00:01.342) 0:51:31.961 ******* 2026-02-20 05:47:35.016317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:47:35.016336 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:47:35.016352 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:47:35.016371 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-20 05:47:35.016390 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:47:35.016409 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-20 05:47:35.016429 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:47:35.016450 | orchestrator | 2026-02-20 05:47:35.016467 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:47:35.016484 | orchestrator | Friday 20 February 2026 05:47:26 +0000 (0:00:02.068) 0:51:34.030 ******* 2026-02-20 05:47:35.016502 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:47:35.016521 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:47:35.016539 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:47:35.016559 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-20 05:47:35.016608 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:47:35.016632 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-20 05:47:35.016644 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:47:35.016654 | orchestrator | 2026-02-20 05:47:35.016665 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-02-20 05:47:35.016682 | orchestrator | Friday 20 February 2026 05:47:29 +0000 (0:00:02.811) 0:51:36.841 ******* 2026-02-20 05:47:35.016700 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.016718 | orchestrator | 2026-02-20 05:47:35.016737 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-20 05:47:35.016754 | orchestrator | Friday 20 February 2026 05:47:30 +0000 (0:00:01.076) 0:51:37.918 ******* 2026-02-20 05:47:35.016774 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-20 05:47:35.016793 | orchestrator | 2026-02-20 05:47:35.016812 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-20 05:47:35.016823 | orchestrator | Friday 20 February 2026 05:47:31 +0000 (0:00:00.900) 0:51:38.819 ******* 2026-02-20 05:47:35.016834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-20 05:47:35.016845 | orchestrator | 2026-02-20 05:47:35.016856 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-20 05:47:35.016866 | orchestrator | Friday 20 February 2026 05:47:32 +0000 (0:00:01.093) 0:51:39.912 ******* 2026-02-20 05:47:35.016877 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:47:35.016888 | orchestrator | 2026-02-20 05:47:35.016899 | orchestrator 
| TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 05:47:35.016910 | orchestrator | Friday 20 February 2026 05:47:33 +0000 (0:00:01.087) 0:51:41.000 *******
2026-02-20 05:47:35.016921 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:47:35.016932 | orchestrator |
2026-02-20 05:47:35.016942 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 05:47:35.016964 | orchestrator | Friday 20 February 2026 05:47:35 +0000 (0:00:01.489) 0:51:42.489 *******
2026-02-20 05:48:25.005195 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005294 | orchestrator |
2026-02-20 05:48:25.005307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 05:48:25.005316 | orchestrator | Friday 20 February 2026 05:47:36 +0000 (0:00:01.509) 0:51:43.999 *******
2026-02-20 05:48:25.005324 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005332 | orchestrator |
2026-02-20 05:48:25.005340 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 05:48:25.005347 | orchestrator | Friday 20 February 2026 05:47:38 +0000 (0:00:01.528) 0:51:45.528 *******
2026-02-20 05:48:25.005355 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005363 | orchestrator |
2026-02-20 05:48:25.005371 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:48:25.005378 | orchestrator | Friday 20 February 2026 05:47:39 +0000 (0:00:01.117) 0:51:46.645 *******
2026-02-20 05:48:25.005386 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005394 | orchestrator |
2026-02-20 05:48:25.005401 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:48:25.005408 | orchestrator | Friday 20 February 2026 05:47:40 +0000 (0:00:01.102) 0:51:47.748 *******
2026-02-20 05:48:25.005416 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005423 | orchestrator |
2026-02-20 05:48:25.005443 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:48:25.005451 | orchestrator | Friday 20 February 2026 05:47:41 +0000 (0:00:01.185) 0:51:48.934 *******
2026-02-20 05:48:25.005459 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005466 | orchestrator |
2026-02-20 05:48:25.005474 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:48:25.005499 | orchestrator | Friday 20 February 2026 05:47:43 +0000 (0:00:01.570) 0:51:50.504 *******
2026-02-20 05:48:25.005507 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005514 | orchestrator |
2026-02-20 05:48:25.005522 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 05:48:25.005529 | orchestrator | Friday 20 February 2026 05:47:44 +0000 (0:00:01.517) 0:51:52.022 *******
2026-02-20 05:48:25.005536 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005544 | orchestrator |
2026-02-20 05:48:25.005551 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:48:25.005558 | orchestrator | Friday 20 February 2026 05:47:45 +0000 (0:00:01.135) 0:51:53.158 *******
2026-02-20 05:48:25.005566 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005573 | orchestrator |
2026-02-20 05:48:25.005580 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:48:25.005588 | orchestrator | Friday 20 February 2026 05:47:46 +0000 (0:00:01.105) 0:51:54.263 *******
2026-02-20 05:48:25.005595 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005602 | orchestrator |
2026-02-20 05:48:25.005609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:48:25.005617 | orchestrator | Friday 20 February 2026 05:47:47 +0000 (0:00:01.126) 0:51:55.390 *******
2026-02-20 05:48:25.005624 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005631 | orchestrator |
2026-02-20 05:48:25.005638 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:48:25.005646 | orchestrator | Friday 20 February 2026 05:47:49 +0000 (0:00:01.161) 0:51:56.551 *******
2026-02-20 05:48:25.005653 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005660 | orchestrator |
2026-02-20 05:48:25.005668 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:48:25.005675 | orchestrator | Friday 20 February 2026 05:47:50 +0000 (0:00:01.141) 0:51:57.693 *******
2026-02-20 05:48:25.005682 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005690 | orchestrator |
2026-02-20 05:48:25.005697 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:48:25.005705 | orchestrator | Friday 20 February 2026 05:47:51 +0000 (0:00:01.159) 0:51:58.853 *******
2026-02-20 05:48:25.005712 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005719 | orchestrator |
2026-02-20 05:48:25.005726 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:48:25.005734 | orchestrator | Friday 20 February 2026 05:47:52 +0000 (0:00:01.114) 0:51:59.967 *******
2026-02-20 05:48:25.005741 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005750 | orchestrator |
2026-02-20 05:48:25.005758 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:48:25.005767 | orchestrator | Friday 20 February 2026 05:47:53 +0000 (0:00:01.158) 0:52:01.125 *******
2026-02-20 05:48:25.005775 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005786 | orchestrator |
2026-02-20 05:48:25.005799 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:48:25.005818 | orchestrator | Friday 20 February 2026 05:47:54 +0000 (0:00:01.145) 0:52:02.271 *******
2026-02-20 05:48:25.005832 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.005844 | orchestrator |
2026-02-20 05:48:25.005856 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:48:25.005868 | orchestrator | Friday 20 February 2026 05:47:56 +0000 (0:00:01.281) 0:52:03.553 *******
2026-02-20 05:48:25.005879 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005891 | orchestrator |
2026-02-20 05:48:25.005903 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:48:25.005915 | orchestrator | Friday 20 February 2026 05:47:57 +0000 (0:00:01.111) 0:52:04.665 *******
2026-02-20 05:48:25.005928 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.005944 | orchestrator |
2026-02-20 05:48:25.005959 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:48:25.005984 | orchestrator | Friday 20 February 2026 05:47:58 +0000 (0:00:01.124) 0:52:05.789 *******
2026-02-20 05:48:25.005998 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006121 | orchestrator |
2026-02-20 05:48:25.006131 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:48:25.006138 | orchestrator | Friday 20 February 2026 05:47:59 +0000 (0:00:01.111) 0:52:06.901 *******
2026-02-20 05:48:25.006146 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006191 | orchestrator |
2026-02-20 05:48:25.006200 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:48:25.006223 | orchestrator | Friday 20 February 2026 05:48:00 +0000 (0:00:01.151)
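The "Check for a … container" and "Set_fact handler_*_status" tasks above implement a simple pattern: probe whether each daemon's container exists on the node, then record a boolean status fact that later restart handlers consult. A minimal illustrative sketch of that pattern (the container naming scheme and command shape are assumptions for illustration, not ceph-ansible's actual code):

```python
# Hedged sketch: derive a per-daemon "handler status" flag from a container
# listing query, in the style of the ceph-handler checks logged above.
# Container name pattern "ceph-<daemon>-<hostname>" is an assumption.

def container_check_cmd(binary: str, daemon: str, hostname: str) -> list[str]:
    """Build a command that lists a running ceph daemon container, if any."""
    return [binary, "ps", "-q", "--filter", f"name=ceph-{daemon}-{hostname}"]

def handler_status(check_stdout: str) -> bool:
    """A daemon counts as running when the ps query returned a container id."""
    return bool(check_stdout.strip())

cmd = container_check_cmd("podman", "osd", "testbed-node-5")
print(" ".join(cmd))            # podman ps -q --filter name=ceph-osd-testbed-node-5
print(handler_status("0123abcd\n"))  # a container id was found -> True
print(handler_status(""))            # nothing running -> False
```

In the run above, osd/mds/rgw/crash/exporter checks returned `ok` (those daemons run on testbed-node-5), while mon/mgr/rbd-mirror/nfs were skipped, so only the matching status facts get set.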
0:52:08.052 *******
2026-02-20 05:48:25.006231 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006239 | orchestrator |
2026-02-20 05:48:25.006246 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:48:25.006254 | orchestrator | Friday 20 February 2026 05:48:01 +0000 (0:00:01.104) 0:52:09.156 *******
2026-02-20 05:48:25.006261 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006269 | orchestrator |
2026-02-20 05:48:25.006276 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:48:25.006284 | orchestrator | Friday 20 February 2026 05:48:02 +0000 (0:00:01.103) 0:52:10.259 *******
2026-02-20 05:48:25.006291 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006299 | orchestrator |
2026-02-20 05:48:25.006306 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:48:25.006314 | orchestrator | Friday 20 February 2026 05:48:03 +0000 (0:00:01.085) 0:52:11.345 *******
2026-02-20 05:48:25.006322 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006329 | orchestrator |
2026-02-20 05:48:25.006337 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:48:25.006351 | orchestrator | Friday 20 February 2026 05:48:04 +0000 (0:00:01.100) 0:52:12.445 *******
2026-02-20 05:48:25.006359 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006366 | orchestrator |
2026-02-20 05:48:25.006373 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:48:25.006381 | orchestrator | Friday 20 February 2026 05:48:06 +0000 (0:00:01.129) 0:52:13.575 *******
2026-02-20 05:48:25.006389 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006397 | orchestrator |
2026-02-20 05:48:25.006405 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:48:25.006414 | orchestrator | Friday 20 February 2026 05:48:07 +0000 (0:00:01.116) 0:52:14.692 *******
2026-02-20 05:48:25.006421 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006429 | orchestrator |
2026-02-20 05:48:25.006436 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 05:48:25.006444 | orchestrator | Friday 20 February 2026 05:48:08 +0000 (0:00:01.143) 0:52:15.835 *******
2026-02-20 05:48:25.006451 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006458 | orchestrator |
2026-02-20 05:48:25.006466 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 05:48:25.006473 | orchestrator | Friday 20 February 2026 05:48:09 +0000 (0:00:01.216) 0:52:17.052 *******
2026-02-20 05:48:25.006480 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.006488 | orchestrator |
2026-02-20 05:48:25.006495 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 05:48:25.006502 | orchestrator | Friday 20 February 2026 05:48:11 +0000 (0:00:02.070) 0:52:19.123 *******
2026-02-20 05:48:25.006510 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.006517 | orchestrator |
2026-02-20 05:48:25.006525 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 05:48:25.006532 | orchestrator | Friday 20 February 2026 05:48:13 +0000 (0:00:02.238) 0:52:21.362 *******
2026-02-20 05:48:25.006540 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-20 05:48:25.006555 | orchestrator |
2026-02-20 05:48:25.006563 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 05:48:25.006570 | orchestrator | Friday 20 February 2026 05:48:14 +0000 (0:00:01.121) 0:52:22.484 *******
2026-02-20 05:48:25.006579 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006586 | orchestrator |
2026-02-20 05:48:25.006594 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 05:48:25.006602 | orchestrator | Friday 20 February 2026 05:48:16 +0000 (0:00:01.109) 0:52:23.594 *******
2026-02-20 05:48:25.006610 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006617 | orchestrator |
2026-02-20 05:48:25.006625 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 05:48:25.006633 | orchestrator | Friday 20 February 2026 05:48:17 +0000 (0:00:01.120) 0:52:24.714 *******
2026-02-20 05:48:25.006640 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 05:48:25.006647 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 05:48:25.006655 | orchestrator |
2026-02-20 05:48:25.006663 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 05:48:25.006671 | orchestrator | Friday 20 February 2026 05:48:19 +0000 (0:00:01.818) 0:52:26.533 *******
2026-02-20 05:48:25.006679 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:48:25.006686 | orchestrator |
2026-02-20 05:48:25.006695 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 05:48:25.006703 | orchestrator | Friday 20 February 2026 05:48:20 +0000 (0:00:01.480) 0:52:28.014 *******
2026-02-20 05:48:25.006711 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006719 | orchestrator |
2026-02-20 05:48:25.006726 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-20 05:48:25.006733 | orchestrator | Friday 20 February 2026 05:48:21 +0000 (0:00:01.116) 0:52:29.130 *******
2026-02-20 05:48:25.006741 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006748 | orchestrator |
2026-02-20 05:48:25.006755 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 05:48:25.006763 | orchestrator | Friday 20 February 2026 05:48:22 +0000 (0:00:01.149) 0:52:30.280 *******
2026-02-20 05:48:25.006770 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:48:25.006778 | orchestrator |
2026-02-20 05:48:25.006785 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 05:48:25.006792 | orchestrator | Friday 20 February 2026 05:48:23 +0000 (0:00:01.099) 0:52:31.379 *******
2026-02-20 05:48:25.006799 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-20 05:48:25.006807 | orchestrator |
2026-02-20 05:48:25.006814 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-20 05:48:25.006827 | orchestrator | Friday 20 February 2026 05:48:24 +0000 (0:00:01.095) 0:52:32.475 *******
2026-02-20 05:49:11.578743 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:49:11.578821 | orchestrator |
2026-02-20 05:49:11.578828 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-20 05:49:11.578835 | orchestrator | Friday 20 February 2026 05:48:26 +0000 (0:00:01.826) 0:52:34.302 *******
2026-02-20 05:49:11.578842 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 05:49:11.578847 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 05:49:11.578852 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 05:49:11.578857 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.578862 | orchestrator |
2026-02-20 05:49:11.578867 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 05:49:11.578872 | orchestrator | Friday 20 February 2026 05:48:27 +0000 (0:00:01.109) 0:52:35.411 *******
2026-02-20 05:49:11.578877 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.578881 | orchestrator |
2026-02-20 05:49:11.578886 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 05:49:11.578957 | orchestrator | Friday 20 February 2026 05:48:29 +0000 (0:00:01.200) 0:52:36.541 *******
2026-02-20 05:49:11.578964 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.578969 | orchestrator |
2026-02-20 05:49:11.578974 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 05:49:11.578978 | orchestrator | Friday 20 February 2026 05:48:30 +0000 (0:00:01.200) 0:52:37.742 *******
2026-02-20 05:49:11.578983 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.578988 | orchestrator |
2026-02-20 05:49:11.578992 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 05:49:11.578997 | orchestrator | Friday 20 February 2026 05:48:31 +0000 (0:00:01.135) 0:52:38.877 *******
2026-02-20 05:49:11.579001 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579006 | orchestrator |
2026-02-20 05:49:11.579011 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 05:49:11.579015 | orchestrator | Friday 20 February 2026 05:48:32 +0000 (0:00:01.128) 0:52:40.005 *******
2026-02-20 05:49:11.579020 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579025 | orchestrator |
2026-02-20 05:49:11.579029 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 05:49:11.579034 | orchestrator | Friday 20 February 2026 05:48:33 +0000 (0:00:01.142) 0:52:41.148 *******
2026-02-20 05:49:11.579038 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:49:11.579043 | orchestrator |
2026-02-20 05:49:11.579048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 05:49:11.579052 | orchestrator | Friday 20 February 2026 05:48:36 +0000 (0:00:02.572) 0:52:43.720 *******
2026-02-20 05:49:11.579057 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:49:11.579062 | orchestrator |
2026-02-20 05:49:11.579066 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 05:49:11.579071 | orchestrator | Friday 20 February 2026 05:48:37 +0000 (0:00:01.121) 0:52:44.842 *******
2026-02-20 05:49:11.579076 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-20 05:49:11.579081 | orchestrator |
2026-02-20 05:49:11.579085 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 05:49:11.579090 | orchestrator | Friday 20 February 2026 05:48:38 +0000 (0:00:01.095) 0:52:45.937 *******
2026-02-20 05:49:11.579094 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579099 | orchestrator |
2026-02-20 05:49:11.579104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 05:49:11.579108 | orchestrator | Friday 20 February 2026 05:48:39 +0000 (0:00:01.132) 0:52:47.070 *******
2026-02-20 05:49:11.579113 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579118 | orchestrator |
2026-02-20 05:49:11.579122 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 05:49:11.579127 | orchestrator | Friday 20 February 2026 05:48:40 +0000 (0:00:01.115) 0:52:48.185 *******
2026-02-20 05:49:11.579132 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579136 | orchestrator |
2026-02-20 05:49:11.579141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 05:49:11.579145 | orchestrator | Friday 20 February 2026 05:48:41 +0000 (0:00:01.181) 0:52:49.367 *******
2026-02-20 05:49:11.579150 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579155 | orchestrator |
2026-02-20 05:49:11.579160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 05:49:11.579164 | orchestrator | Friday 20 February 2026 05:48:43 +0000 (0:00:01.138) 0:52:50.505 *******
2026-02-20 05:49:11.579169 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579174 | orchestrator |
2026-02-20 05:49:11.579178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 05:49:11.579183 | orchestrator | Friday 20 February 2026 05:48:44 +0000 (0:00:01.140) 0:52:51.645 *******
2026-02-20 05:49:11.579188 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579198 | orchestrator |
2026-02-20 05:49:11.579203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 05:49:11.579207 | orchestrator | Friday 20 February 2026 05:48:45 +0000 (0:00:01.147) 0:52:52.793 *******
2026-02-20 05:49:11.579212 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579216 | orchestrator |
2026-02-20 05:49:11.579221 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 05:49:11.579226 | orchestrator | Friday 20 February 2026 05:48:46 +0000 (0:00:01.118) 0:52:53.911 *******
2026-02-20 05:49:11.579230 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579235 | orchestrator |
2026-02-20 05:49:11.579239 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 05:49:11.579244 | orchestrator | Friday 20 February 2026 05:48:47 +0000 (0:00:01.118) 0:52:55.030 *******
2026-02-20 05:49:11.579249 | orchestrator | ok: [testbed-node-5]
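The release.yml tasks above try each known release name in turn and set `ceph_release` for the one matching the major version reported by `ceph --version`; here only the "reef" task returned `ok`, so the pulled image is an 18.x Ceph. The same matching can be sketched as a lookup table (the parsing helper is illustrative, not the role's actual code):

```python
# Hedged sketch of the version-to-release matching visible in the log.
# Major-version/codename pairs are standard Ceph release names.
RELEASES = {14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy", 18: "reef"}

def release_for(version_string: str) -> str:
    """Map output like 'ceph version 18.2.4 ...' (or bare '18.2.4') to a codename."""
    major = int(version_string.split(".")[0].split()[-1])
    return RELEASES.get(major, "unknown")

print(release_for("18.2.4"))               # reef
print(release_for("ceph version 17.2.7"))  # quincy
```

Versions outside the table fall through to "unknown", mirroring how none of the older release tasks matched in this run.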
2026-02-20 05:49:11.579253 | orchestrator |
2026-02-20 05:49:11.579258 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 05:49:11.579272 | orchestrator | Friday 20 February 2026 05:48:48 +0000 (0:00:01.136) 0:52:56.167 *******
2026-02-20 05:49:11.579277 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-20 05:49:11.579283 | orchestrator |
2026-02-20 05:49:11.579287 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 05:49:11.579292 | orchestrator | Friday 20 February 2026 05:48:49 +0000 (0:00:01.100) 0:52:57.267 *******
2026-02-20 05:49:11.579297 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-20 05:49:11.579302 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-20 05:49:11.579306 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-20 05:49:11.579311 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-20 05:49:11.579317 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-20 05:49:11.579322 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-20 05:49:11.579327 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-20 05:49:11.579333 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-20 05:49:11.579341 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 05:49:11.579347 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 05:49:11.579352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 05:49:11.579357 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 05:49:11.579363 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 05:49:11.579368 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 05:49:11.579373 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-20 05:49:11.579379 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-20 05:49:11.579384 | orchestrator |
2026-02-20 05:49:11.579390 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 05:49:11.579395 | orchestrator | Friday 20 February 2026 05:48:56 +0000 (0:00:06.828) 0:53:04.095 *******
2026-02-20 05:49:11.579400 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-20 05:49:11.579406 | orchestrator |
2026-02-20 05:49:11.579411 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-20 05:49:11.579417 | orchestrator | Friday 20 February 2026 05:48:57 +0000 (0:00:01.099) 0:53:05.194 *******
2026-02-20 05:49:11.579421 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 05:49:11.579427 | orchestrator |
2026-02-20 05:49:11.579432 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-20 05:49:11.579436 | orchestrator | Friday 20 February 2026 05:48:59 +0000 (0:00:01.570) 0:53:06.765 *******
2026-02-20 05:49:11.579445 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 05:49:11.579450 | orchestrator |
2026-02-20 05:49:11.579454 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 05:49:11.579459 | orchestrator | Friday 20 February 2026 05:49:01 +0000 (0:00:02.051) 0:53:08.816 *******
2026-02-20 05:49:11.579463 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579468 | orchestrator |
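The rgw tasks above describe one gateway instance, `rgw0`, bound to 192.168.16.15:8081 via the Beast frontend. Expressed in ceph.conf form, the key/value pairs the play applies for this instance would look roughly like the fragment below (assembled from the values printed in the log; in this run they are pushed into the cluster configuration database rather than written to a file, since "Set rgw configs to file" is skipped later):

```ini
[client.rgw.default.testbed-node-5.rgw0]
log_file = /var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log
rgw_frontends = beast endpoint=192.168.16.15:8081
```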
2026-02-20 05:49:11.579473 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 05:49:11.579477 | orchestrator | Friday 20 February 2026 05:49:02 +0000 (0:00:01.159) 0:53:09.975 *******
2026-02-20 05:49:11.579482 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579486 | orchestrator |
2026-02-20 05:49:11.579491 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 05:49:11.579495 | orchestrator | Friday 20 February 2026 05:49:03 +0000 (0:00:01.105) 0:53:11.081 *******
2026-02-20 05:49:11.579500 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579505 | orchestrator |
2026-02-20 05:49:11.579509 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 05:49:11.579514 | orchestrator | Friday 20 February 2026 05:49:04 +0000 (0:00:01.149) 0:53:12.230 *******
2026-02-20 05:49:11.579518 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579523 | orchestrator |
2026-02-20 05:49:11.579528 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 05:49:11.579532 | orchestrator | Friday 20 February 2026 05:49:05 +0000 (0:00:01.171) 0:53:13.402 *******
2026-02-20 05:49:11.579537 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579541 | orchestrator |
2026-02-20 05:49:11.579546 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 05:49:11.579551 | orchestrator | Friday 20 February 2026 05:49:07 +0000 (0:00:01.129) 0:53:14.532 *******
2026-02-20 05:49:11.579555 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579560 | orchestrator |
2026-02-20 05:49:11.579564 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 05:49:11.579569 | orchestrator | Friday 20 February 2026 05:49:08 +0000 (0:00:01.120) 0:53:15.652 *******
2026-02-20 05:49:11.579574 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579578 | orchestrator |
2026-02-20 05:49:11.579583 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 05:49:11.579587 | orchestrator | Friday 20 February 2026 05:49:09 +0000 (0:00:01.139) 0:53:16.791 *******
2026-02-20 05:49:11.579592 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579597 | orchestrator |
2026-02-20 05:49:11.579601 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 05:49:11.579606 | orchestrator | Friday 20 February 2026 05:49:10 +0000 (0:00:01.139) 0:53:17.931 *******
2026-02-20 05:49:11.579610 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:49:11.579615 | orchestrator |
2026-02-20 05:49:11.579623 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 05:50:08.242742 | orchestrator | Friday 20 February 2026 05:49:11 +0000 (0:00:01.121) 0:53:19.053 *******
2026-02-20 05:50:08.242987 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243015 | orchestrator |
2026-02-20 05:50:08.243035 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 05:50:08.243052 | orchestrator | Friday 20 February 2026 05:49:12 +0000 (0:00:01.165) 0:53:20.219 *******
2026-02-20 05:50:08.243070 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243087 | orchestrator |
2026-02-20 05:50:08.243103 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:50:08.243120 | orchestrator | Friday 20 February 2026 05:49:13 +0000 (0:00:01.133) 0:53:21.353 *******
2026-02-20 05:50:08.243138 | orchestrator | changed: [testbed-node-5 ->
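The skipped tasks above would have counted OSDs by parsing the JSON report from `ceph-volume lvm batch --report` (the play carries two variants because the report schema changed between ceph-volume versions). A rough illustrative sketch of that counting step, under the assumption that the newer report is a dict with an "osds" list and the legacy report a bare list (the sample data is fabricated):

```python
import json

# Hedged sketch: count OSDs a `ceph-volume lvm batch --report --format json`
# run would create. Schema assumptions: new-style report = dict with an
# "osds" list; legacy report = a bare list of osd specs.
def count_osds(report_json: str) -> int:
    report = json.loads(report_json)
    if isinstance(report, dict):
        return len(report.get("osds", []))
    return len(report)

sample = '{"osds": [{"data": "/dev/sdb"}, {"data": "/dev/sdc"}], "vgs": []}'
print(count_osds(sample))  # 2
```

In this run the whole branch is skipped on testbed-node-5, since its OSDs were already deployed in a previous play.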
testbed-node-2(192.168.16.12)]
2026-02-20 05:50:08.243190 | orchestrator |
2026-02-20 05:50:08.243211 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:50:08.243228 | orchestrator | Friday 20 February 2026 05:49:19 +0000 (0:00:05.162) 0:53:26.515 *******
2026-02-20 05:50:08.243266 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 05:50:08.243288 | orchestrator |
2026-02-20 05:50:08.243307 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-20 05:50:08.243325 | orchestrator | Friday 20 February 2026 05:49:20 +0000 (0:00:01.178) 0:53:27.693 *******
2026-02-20 05:50:08.243348 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-20 05:50:08.243372 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-20 05:50:08.243388 | orchestrator |
2026-02-20 05:50:08.243399 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-20 05:50:08.243410 | orchestrator | Friday 20 February 2026 05:49:25 +0000 (0:00:05.218) 0:53:32.912 *******
2026-02-20 05:50:08.243422 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243434 | orchestrator |
2026-02-20 05:50:08.243445 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-20 05:50:08.243456 | orchestrator | Friday 20 February 2026 05:49:26 +0000 (0:00:01.156) 0:53:34.068 *******
2026-02-20 05:50:08.243467 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243478 | orchestrator |
2026-02-20 05:50:08.243490 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:50:08.243501 | orchestrator | Friday 20 February 2026 05:49:27 +0000 (0:00:01.112) 0:53:35.181 *******
2026-02-20 05:50:08.243512 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243524 | orchestrator |
2026-02-20 05:50:08.243535 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:50:08.243546 | orchestrator | Friday 20 February 2026 05:49:28 +0000 (0:00:01.141) 0:53:36.323 *******
2026-02-20 05:50:08.243557 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243568 | orchestrator |
2026-02-20 05:50:08.243578 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:50:08.243590 | orchestrator | Friday 20 February 2026 05:49:29 +0000 (0:00:01.125) 0:53:37.448 *******
2026-02-20 05:50:08.243600 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243611 | orchestrator |
2026-02-20 05:50:08.243622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:50:08.243633 | orchestrator | Friday 20 February 2026 05:49:31 +0000 (0:00:01.155) 0:53:38.604 *******
2026-02-20 05:50:08.243644 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:50:08.243656 | orchestrator |
2026-02-20 05:50:08.243667 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:50:08.243678 | orchestrator | Friday 20 February 2026 05:49:32 +0000 (0:00:01.243) 0:53:39.847 *******
2026-02-20 05:50:08.243689 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 05:50:08.243701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 05:50:08.243712 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 05:50:08.243723 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243734 | orchestrator |
2026-02-20 05:50:08.243745 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:50:08.243766 | orchestrator | Friday 20 February 2026 05:49:33 +0000 (0:00:01.402) 0:53:41.251 *******
2026-02-20 05:50:08.243777 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 05:50:08.243788 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 05:50:08.243799 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 05:50:08.243810 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.243858 | orchestrator |
2026-02-20 05:50:08.243876 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:50:08.243896 | orchestrator | Friday 20 February 2026 05:49:35 +0000 (0:00:01.390) 0:53:42.641 *******
2026-02-20 05:50:08.243916 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 05:50:08.243935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 05:50:08.243949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 05:50:08.243985 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.244004 | orchestrator |
2026-02-20 05:50:08.244022 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:50:08.244041 | orchestrator | Friday 20 February 2026 05:49:36 +0000 (0:00:01.416) 0:53:44.058 *******
2026-02-20 05:50:08.244060 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:50:08.244078 | orchestrator |
2026-02-20 05:50:08.244098 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:50:08.244117 | orchestrator | Friday 20 February 2026 05:49:37 +0000 (0:00:01.126) 0:53:45.184 *******
2026-02-20 05:50:08.244136 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-20 05:50:08.244149 | orchestrator |
2026-02-20 05:50:08.244169 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-20 05:50:08.244187 | orchestrator | Friday 20 February 2026 05:49:39 +0000 (0:00:01.784) 0:53:46.969 *******
2026-02-20 05:50:08.244205 | orchestrator | ok: [testbed-node-5]
2026-02-20 05:50:08.244222 | orchestrator |
2026-02-20 05:50:08.244238 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-20 05:50:08.244265 | orchestrator | Friday 20 February 2026 05:49:41 +0000 (0:00:01.131) 0:53:48.753 *******
2026-02-20 05:50:08.244287 | orchestrator | skipping: [testbed-node-5]
2026-02-20 05:50:08.244305 | orchestrator |
2026-02-20 05:50:08.244323 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-20 05:50:08.244339 | orchestrator | Friday 20 February 2026 05:49:42 +0000 (0:00:01.131) 0:53:49.884 *******
2026-02-20 05:50:08.244356 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5
2026-02-20 05:50:08.244376 | orchestrator |
2026-02-20 05:50:08.244394 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-20 05:50:08.244413 | orchestrator | Friday 20 February 2026 05:49:43 +0000 (0:00:01.484) 0:53:51.370 *******
2026-02-20 05:50:08.244432 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-20 05:50:08.244451 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-20 05:50:08.244471 | orchestrator | 2026-02-20 05:50:08.244483 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-20 05:50:08.244494 | orchestrator | Friday 20 February 2026 05:49:45 +0000 (0:00:01.866) 0:53:53.236 ******* 2026-02-20 05:50:08.244505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:50:08.244516 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 05:50:08.244527 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:50:08.244538 | orchestrator | 2026-02-20 05:50:08.244549 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:50:08.244560 | orchestrator | Friday 20 February 2026 05:49:49 +0000 (0:00:03.404) 0:53:56.640 ******* 2026-02-20 05:50:08.244571 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-20 05:50:08.244582 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 05:50:08.244604 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.244615 | orchestrator | 2026-02-20 05:50:08.244626 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-20 05:50:08.244637 | orchestrator | Friday 20 February 2026 05:49:51 +0000 (0:00:02.008) 0:53:58.649 ******* 2026-02-20 05:50:08.244648 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.244659 | orchestrator | 2026-02-20 05:50:08.244670 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-20 05:50:08.244681 | orchestrator | Friday 20 February 2026 05:49:52 +0000 (0:00:01.536) 0:54:00.186 ******* 2026-02-20 05:50:08.244692 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:50:08.244703 | orchestrator | 2026-02-20 05:50:08.244714 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-20 
05:50:08.244725 | orchestrator | Friday 20 February 2026 05:49:53 +0000 (0:00:01.099) 0:54:01.286 ******* 2026-02-20 05:50:08.244736 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5 2026-02-20 05:50:08.244748 | orchestrator | 2026-02-20 05:50:08.244759 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-20 05:50:08.244770 | orchestrator | Friday 20 February 2026 05:49:55 +0000 (0:00:01.467) 0:54:02.754 ******* 2026-02-20 05:50:08.244781 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5 2026-02-20 05:50:08.244792 | orchestrator | 2026-02-20 05:50:08.244803 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-20 05:50:08.244854 | orchestrator | Friday 20 February 2026 05:49:56 +0000 (0:00:01.619) 0:54:04.373 ******* 2026-02-20 05:50:08.244868 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.244879 | orchestrator | 2026-02-20 05:50:08.244890 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-20 05:50:08.244901 | orchestrator | Friday 20 February 2026 05:49:58 +0000 (0:00:02.083) 0:54:06.457 ******* 2026-02-20 05:50:08.244915 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.244934 | orchestrator | 2026-02-20 05:50:08.244962 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-20 05:50:08.244983 | orchestrator | Friday 20 February 2026 05:50:00 +0000 (0:00:01.915) 0:54:08.373 ******* 2026-02-20 05:50:08.245000 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.245018 | orchestrator | 2026-02-20 05:50:08.245034 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-20 05:50:08.245052 | orchestrator | Friday 20 February 2026 05:50:03 +0000 (0:00:02.289) 0:54:10.662 ******* 2026-02-20 05:50:08.245070 | 
orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.245088 | orchestrator | 2026-02-20 05:50:08.245106 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-20 05:50:08.245126 | orchestrator | Friday 20 February 2026 05:50:05 +0000 (0:00:02.316) 0:54:12.978 ******* 2026-02-20 05:50:08.245145 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:08.245164 | orchestrator | 2026-02-20 05:50:08.245184 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-20 05:50:08.245199 | orchestrator | Friday 20 February 2026 05:50:07 +0000 (0:00:01.608) 0:54:14.587 ******* 2026-02-20 05:50:08.245225 | orchestrator | skipping: [testbed-node-5] 2026-02-20 05:50:40.415978 | orchestrator | 2026-02-20 05:50:40.416103 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-20 05:50:40.416123 | orchestrator | Friday 20 February 2026 05:50:08 +0000 (0:00:01.130) 0:54:15.717 ******* 2026-02-20 05:50:40.416136 | orchestrator | ok: [testbed-node-5] 2026-02-20 05:50:40.416149 | orchestrator | 2026-02-20 05:50:40.416160 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-20 05:50:40.416171 | orchestrator | 2026-02-20 05:50:40.416183 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:50:40.416194 | orchestrator | Friday 20 February 2026 05:50:16 +0000 (0:00:08.450) 0:54:24.168 ******* 2026-02-20 05:50:40.416205 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-3 2026-02-20 05:50:40.416242 | orchestrator | 2026-02-20 05:50:40.416254 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:50:40.416266 | orchestrator | Friday 20 February 2026 05:50:18 +0000 (0:00:01.466) 0:54:25.634 ******* 2026-02-20 05:50:40.416277 | 
orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416303 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416315 | orchestrator | 2026-02-20 05:50:40.416326 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:50:40.416338 | orchestrator | Friday 20 February 2026 05:50:19 +0000 (0:00:01.565) 0:54:27.199 ******* 2026-02-20 05:50:40.416349 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416360 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416371 | orchestrator | 2026-02-20 05:50:40.416382 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:50:40.416393 | orchestrator | Friday 20 February 2026 05:50:20 +0000 (0:00:01.262) 0:54:28.462 ******* 2026-02-20 05:50:40.416404 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416415 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416426 | orchestrator | 2026-02-20 05:50:40.416437 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:50:40.416448 | orchestrator | Friday 20 February 2026 05:50:22 +0000 (0:00:01.493) 0:54:29.956 ******* 2026-02-20 05:50:40.416459 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416470 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416481 | orchestrator | 2026-02-20 05:50:40.416492 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:50:40.416506 | orchestrator | Friday 20 February 2026 05:50:23 +0000 (0:00:01.237) 0:54:31.193 ******* 2026-02-20 05:50:40.416518 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416530 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416543 | orchestrator | 2026-02-20 05:50:40.416556 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:50:40.416569 | orchestrator | Friday 20 February 
2026 05:50:24 +0000 (0:00:01.180) 0:54:32.374 ******* 2026-02-20 05:50:40.416581 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416593 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416604 | orchestrator | 2026-02-20 05:50:40.416615 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:50:40.416626 | orchestrator | Friday 20 February 2026 05:50:26 +0000 (0:00:01.393) 0:54:33.768 ******* 2026-02-20 05:50:40.416637 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:40.416649 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:50:40.416660 | orchestrator | 2026-02-20 05:50:40.416671 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:50:40.416682 | orchestrator | Friday 20 February 2026 05:50:27 +0000 (0:00:01.209) 0:54:34.977 ******* 2026-02-20 05:50:40.416693 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416705 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416716 | orchestrator | 2026-02-20 05:50:40.416727 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:50:40.416738 | orchestrator | Friday 20 February 2026 05:50:28 +0000 (0:00:01.207) 0:54:36.185 ******* 2026-02-20 05:50:40.416749 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:50:40.416793 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:50:40.416806 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:50:40.416817 | orchestrator | 2026-02-20 05:50:40.416828 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:50:40.416851 | orchestrator | Friday 20 February 2026 05:50:30 +0000 (0:00:01.614) 0:54:37.799 ******* 2026-02-20 05:50:40.416863 
| orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:40.416874 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:40.416885 | orchestrator | 2026-02-20 05:50:40.416896 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:50:40.416917 | orchestrator | Friday 20 February 2026 05:50:31 +0000 (0:00:01.276) 0:54:39.075 ******* 2026-02-20 05:50:40.416928 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:50:40.416940 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:50:40.416950 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:50:40.416962 | orchestrator | 2026-02-20 05:50:40.416973 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:50:40.416984 | orchestrator | Friday 20 February 2026 05:50:34 +0000 (0:00:03.147) 0:54:42.223 ******* 2026-02-20 05:50:40.416995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-20 05:50:40.417007 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-20 05:50:40.417018 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-20 05:50:40.417029 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:40.417039 | orchestrator | 2026-02-20 05:50:40.417051 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:50:40.417062 | orchestrator | Friday 20 February 2026 05:50:36 +0000 (0:00:01.380) 0:54:43.603 ******* 2026-02-20 05:50:40.417093 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417109 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417132 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:40.417143 | orchestrator | 2026-02-20 05:50:40.417160 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:50:40.417172 | orchestrator | Friday 20 February 2026 05:50:38 +0000 (0:00:01.910) 0:54:45.514 ******* 2026-02-20 05:50:40.417185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:50:40.417222 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:40.417234 | orchestrator | 2026-02-20 05:50:40.417245 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:50:40.417256 | orchestrator | Friday 20 February 2026 05:50:39 +0000 (0:00:01.140) 0:54:46.655 ******* 2026-02-20 05:50:40.417276 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:50:32.090773', 'end': '2026-02-20 05:50:32.151020', 'delta': '0:00:00.060247', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:50:40.417291 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:50:32.661798', 'end': '2026-02-20 05:50:32.709507', 'delta': '0:00:00.047709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:50:40.417313 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:50:33.541715', 'end': '2026-02-20 05:50:33.591265', 'delta': '0:00:00.049550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:50:58.376037 | orchestrator | 2026-02-20 05:50:58.376130 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:50:58.376139 | orchestrator | Friday 20 February 2026 05:50:40 +0000 (0:00:01.233) 0:54:47.888 ******* 2026-02-20 05:50:58.376144 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376150 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376154 | orchestrator | 2026-02-20 05:50:58.376170 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:50:58.376175 | orchestrator | Friday 20 February 2026 05:50:41 +0000 (0:00:01.310) 0:54:49.199 ******* 2026-02-20 05:50:58.376180 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376185 | orchestrator | 2026-02-20 05:50:58.376198 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:50:58.376249 | orchestrator | Friday 20 
February 2026 05:50:42 +0000 (0:00:01.154) 0:54:50.353 ******* 2026-02-20 05:50:58.376255 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376259 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376264 | orchestrator | 2026-02-20 05:50:58.376269 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:50:58.376273 | orchestrator | Friday 20 February 2026 05:50:44 +0000 (0:00:01.137) 0:54:51.491 ******* 2026-02-20 05:50:58.376278 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:50:58.376283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:50:58.376287 | orchestrator | 2026-02-20 05:50:58.376291 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:50:58.376295 | orchestrator | Friday 20 February 2026 05:50:46 +0000 (0:00:02.090) 0:54:53.581 ******* 2026-02-20 05:50:58.376315 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376319 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376324 | orchestrator | 2026-02-20 05:50:58.376328 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:50:58.376332 | orchestrator | Friday 20 February 2026 05:50:47 +0000 (0:00:01.193) 0:54:54.775 ******* 2026-02-20 05:50:58.376336 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376340 | orchestrator | 2026-02-20 05:50:58.376344 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:50:58.376349 | orchestrator | Friday 20 February 2026 05:50:48 +0000 (0:00:01.107) 0:54:55.882 ******* 2026-02-20 05:50:58.376353 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376357 | orchestrator | 2026-02-20 05:50:58.376361 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 
05:50:58.376365 | orchestrator | Friday 20 February 2026 05:50:49 +0000 (0:00:01.170) 0:54:57.053 ******* 2026-02-20 05:50:58.376369 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376373 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:50:58.376378 | orchestrator | 2026-02-20 05:50:58.376382 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:50:58.376386 | orchestrator | Friday 20 February 2026 05:50:50 +0000 (0:00:01.154) 0:54:58.207 ******* 2026-02-20 05:50:58.376391 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376395 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:50:58.376399 | orchestrator | 2026-02-20 05:50:58.376403 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:50:58.376407 | orchestrator | Friday 20 February 2026 05:50:51 +0000 (0:00:01.232) 0:54:59.440 ******* 2026-02-20 05:50:58.376411 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376415 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376420 | orchestrator | 2026-02-20 05:50:58.376424 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:50:58.376428 | orchestrator | Friday 20 February 2026 05:50:53 +0000 (0:00:01.231) 0:55:00.671 ******* 2026-02-20 05:50:58.376432 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376436 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:50:58.376440 | orchestrator | 2026-02-20 05:50:58.376445 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:50:58.376449 | orchestrator | Friday 20 February 2026 05:50:54 +0000 (0:00:01.212) 0:55:01.883 ******* 2026-02-20 05:50:58.376453 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376457 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376461 | orchestrator | 2026-02-20 
05:50:58.376465 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:50:58.376470 | orchestrator | Friday 20 February 2026 05:50:55 +0000 (0:00:01.253) 0:55:03.137 ******* 2026-02-20 05:50:58.376474 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.376478 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:50:58.376482 | orchestrator | 2026-02-20 05:50:58.376486 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:50:58.376491 | orchestrator | Friday 20 February 2026 05:50:56 +0000 (0:00:01.220) 0:55:04.357 ******* 2026-02-20 05:50:58.376495 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:50:58.376500 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:50:58.376504 | orchestrator | 2026-02-20 05:50:58.376508 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:50:58.376512 | orchestrator | Friday 20 February 2026 05:50:58 +0000 (0:00:01.282) 0:55:05.640 ******* 2026-02-20 05:50:58.376518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.376543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}})  2026-02-20 05:50:58.376550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:50:58.376555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}})  2026-02-20 05:50:58.376560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.376565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.376570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:50:58.376575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.376587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}})  2026-02-20 05:50:58.814774 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}})  2026-02-20 05:50:58.814786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:50:58.814870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814899 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:50:58.814910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:50:58.814919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}})  2026-02-20 05:50:58.814936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:50:58.814956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}})  2026-02-20 05:51:00.274339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:51:00.274446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274474 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}})  2026-02-20 05:51:00.274502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}})  2026-02-20 05:51:00.274507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:51:00.274523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:51:00.274538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:51:00.500950 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:51:00.501031 | orchestrator | 2026-02-20 05:51:00.501041 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:51:00.501049 | orchestrator | Friday 20 February 2026 05:51:00 +0000 (0:00:02.107) 0:55:07.748 ******* 2026-02-20 05:51:00.501058 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501131 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501169 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.501189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.548875 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.548987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.549003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.549033 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.549068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.549080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:51:00.549105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.549117 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.549134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.638817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.638944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.638961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.638988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639013 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:00.639045 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639139 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:00.639163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:29.563866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:29.563985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:29.564020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 05:51:29.564035 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564049 | orchestrator |
2026-02-20 05:51:29.564061 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 05:51:29.564074 | orchestrator | Friday 20 February 2026 05:51:01 +0000 (0:00:01.566) 0:55:09.314 *******
2026-02-20 05:51:29.564085 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:51:29.564097 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:51:29.564108 | orchestrator |
2026-02-20 05:51:29.564119 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 05:51:29.564130 | orchestrator | Friday 20 February 2026 05:51:03 +0000 (0:00:01.722) 0:55:11.037 *******
2026-02-20 05:51:29.564141 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:51:29.564152 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:51:29.564163 | orchestrator |
2026-02-20 05:51:29.564174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:51:29.564185 | orchestrator | Friday 20 February 2026 05:51:04 +0000 (0:00:01.290) 0:55:12.328 *******
2026-02-20 05:51:29.564196 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:51:29.564231 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:51:29.564243 | orchestrator |
2026-02-20 05:51:29.564254 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:51:29.564265 | orchestrator | Friday 20 February 2026 05:51:06 +0000 (0:00:01.698) 0:55:14.027 *******
2026-02-20 05:51:29.564276 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564286 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564297 | orchestrator |
2026-02-20 05:51:29.564308 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 05:51:29.564319 | orchestrator | Friday 20 February 2026 05:51:07 +0000 (0:00:01.248) 0:55:15.276 *******
2026-02-20 05:51:29.564329 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564340 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564351 | orchestrator |
2026-02-20 05:51:29.564362 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 05:51:29.564373 | orchestrator | Friday 20 February 2026 05:51:09 +0000 (0:00:01.799) 0:55:17.076 *******
2026-02-20 05:51:29.564383 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564395 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564409 | orchestrator |
2026-02-20 05:51:29.564421 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 05:51:29.564434 | orchestrator | Friday 20 February 2026 05:51:10 +0000 (0:00:01.279) 0:55:18.356 *******
2026-02-20 05:51:29.564446 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 05:51:29.564459 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-20 05:51:29.564471 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 05:51:29.564484 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-20 05:51:29.564496 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 05:51:29.564509 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-20 05:51:29.564521 | orchestrator |
2026-02-20 05:51:29.564533 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 05:51:29.564546 | orchestrator | Friday 20 February 2026 05:51:12 +0000 (0:00:01.830) 0:55:20.187 *******
2026-02-20 05:51:29.564578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 05:51:29.564591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 05:51:29.564603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 05:51:29.564615 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-20 05:51:29.564641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-20 05:51:29.564653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-20 05:51:29.564666 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564678 | orchestrator |
2026-02-20 05:51:29.564729 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 05:51:29.564740 | orchestrator | Friday 20 February 2026 05:51:14 +0000 (0:00:01.335) 0:55:21.523 *******
2026-02-20 05:51:29.564752 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3
2026-02-20 05:51:29.564763 | orchestrator |
2026-02-20 05:51:29.564775 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:51:29.564787 | orchestrator | Friday 20 February 2026 05:51:15 +0000 (0:00:01.458) 0:55:22.981 *******
2026-02-20 05:51:29.564798 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564809 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564820 | orchestrator |
2026-02-20 05:51:29.564831 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:51:29.564842 | orchestrator | Friday 20 February 2026 05:51:16 +0000 (0:00:01.195) 0:55:24.176 *******
2026-02-20 05:51:29.564853 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564873 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564884 | orchestrator |
2026-02-20 05:51:29.564895 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:51:29.564906 | orchestrator | Friday 20 February 2026 05:51:17 +0000 (0:00:01.235) 0:55:25.411 *******
2026-02-20 05:51:29.564917 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.564928 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:51:29.564939 | orchestrator |
2026-02-20 05:51:29.564949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:51:29.564960 | orchestrator | Friday 20 February 2026 05:51:19 +0000 (0:00:01.246) 0:55:26.658 *******
2026-02-20 05:51:29.564971 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:51:29.564989 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:51:29.565000 | orchestrator |
2026-02-20 05:51:29.565011 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:51:29.565022 | orchestrator | Friday 20 February 2026 05:51:20 +0000 (0:00:01.383) 0:55:28.042 *******
2026-02-20 05:51:29.565032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:51:29.565043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:51:29.565054 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:51:29.565065 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.565076 | orchestrator |
2026-02-20 05:51:29.565086 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:51:29.565097 | orchestrator | Friday 20 February 2026 05:51:21 +0000 (0:00:01.365) 0:55:29.408 *******
2026-02-20 05:51:29.565108 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:51:29.565119 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:51:29.565129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:51:29.565140 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.565151 | orchestrator |
2026-02-20 05:51:29.565162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:51:29.565172 | orchestrator | Friday 20 February 2026 05:51:23 +0000 (0:00:01.406) 0:55:30.815 *******
2026-02-20 05:51:29.565183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 05:51:29.565194 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:51:29.565205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 05:51:29.565215 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:51:29.565226 | orchestrator |
2026-02-20 05:51:29.565237 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:51:29.565248 | orchestrator | Friday 20 February 2026 05:51:24 +0000 (0:00:01.390) 0:55:32.205 *******
2026-02-20 05:51:29.565259 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:51:29.565269 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:51:29.565280 | orchestrator |
2026-02-20 05:51:29.565291 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:51:29.565302 | orchestrator | Friday 20 February 2026 05:51:26 +0000 (0:00:01.309) 0:55:33.514 *******
2026-02-20 05:51:29.565312 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-20 05:51:29.565323 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-20 05:51:29.565334 | orchestrator |
2026-02-20 05:51:29.565345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 05:51:29.565356 | orchestrator | Friday 20 February 2026 05:51:27 +0000 (0:00:01.427) 0:55:34.942 *******
2026-02-20 05:51:29.565367 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:51:29.565378 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:51:29.565389 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:51:29.565400 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:51:29.565419 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:51:29.565430 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:51:29.565448 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:52:14.302405 | orchestrator |
2026-02-20 05:52:14.302506 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 05:52:14.302519 | orchestrator | Friday 20 February 2026 05:51:29 +0000 (0:00:02.080) 0:55:37.022 *******
2026-02-20 05:52:14.302529 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 05:52:14.302538 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 05:52:14.302546 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 05:52:14.302555 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 05:52:14.302564 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 05:52:14.302572 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:52:14.302580 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:52:14.302589 | orchestrator |
2026-02-20 05:52:14.302597 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-20 05:52:14.302605 | orchestrator | Friday 20 February 2026 05:51:32 +0000 (0:00:03.221) 0:55:40.244 *******
2026-02-20 05:52:14.302685 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.302697 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.302705 | orchestrator |
2026-02-20 05:52:14.302714 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 05:52:14.302722 | orchestrator | Friday 20 February 2026 05:51:33 +0000 (0:00:01.224) 0:55:41.468 *******
2026-02-20 05:52:14.302730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3
2026-02-20 05:52:14.302739 | orchestrator |
2026-02-20 05:52:14.302747 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 05:52:14.302755 | orchestrator | Friday 20 February 2026 05:51:35 +0000 (0:00:01.186) 0:55:42.654 *******
2026-02-20 05:52:14.302763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3
2026-02-20 05:52:14.302771 | orchestrator |
2026-02-20 05:52:14.302779 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 05:52:14.302801 | orchestrator | Friday 20 February 2026 05:51:36 +0000 (0:00:01.222) 0:55:43.877 *******
2026-02-20 05:52:14.302810 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.302818 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.302827 | orchestrator |
2026-02-20 05:52:14.302834 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 05:52:14.302843 | orchestrator | Friday 20 February 2026 05:51:37 +0000 (0:00:01.587) 0:55:45.464 *******
2026-02-20 05:52:14.302852 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.302860 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.302868 | orchestrator |
2026-02-20 05:52:14.302876 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 05:52:14.302884 | orchestrator | Friday 20 February 2026 05:51:39 +0000 (0:00:01.660) 0:55:47.124 *******
2026-02-20 05:52:14.302892 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.302900 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.302908 | orchestrator |
2026-02-20 05:52:14.302916 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 05:52:14.302966 | orchestrator | Friday 20 February 2026 05:51:41 +0000 (0:00:01.631) 0:55:48.755 *******
2026-02-20 05:52:14.302976 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.302985 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.302995 | orchestrator |
2026-02-20 05:52:14.303026 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 05:52:14.303036 | orchestrator | Friday 20 February 2026 05:51:42 +0000 (0:00:01.695) 0:55:50.451 *******
2026-02-20 05:52:14.303045 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303055 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303064 | orchestrator |
2026-02-20 05:52:14.303073 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:52:14.303082 | orchestrator | Friday 20 February 2026 05:51:44 +0000 (0:00:01.286) 0:55:51.738 *******
2026-02-20 05:52:14.303092 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303101 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303111 | orchestrator |
2026-02-20 05:52:14.303120 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:52:14.303129 | orchestrator | Friday 20 February 2026 05:51:45 +0000 (0:00:01.260) 0:55:52.999 *******
2026-02-20 05:52:14.303139 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303148 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303158 | orchestrator |
2026-02-20 05:52:14.303167 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:52:14.303175 | orchestrator | Friday 20 February 2026 05:51:46 +0000 (0:00:01.243) 0:55:54.242 *******
2026-02-20 05:52:14.303183 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303191 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303204 | orchestrator |
2026-02-20 05:52:14.303218 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:52:14.303230 | orchestrator | Friday 20 February 2026 05:51:48 +0000 (0:00:01.724) 0:55:55.967 *******
2026-02-20 05:52:14.303242 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303255 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303269 | orchestrator |
2026-02-20 05:52:14.303282 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 05:52:14.303295 | orchestrator | Friday 20 February 2026 05:51:50 +0000 (0:00:01.681) 0:55:57.649 *******
2026-02-20 05:52:14.303309 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303317 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303325 | orchestrator |
2026-02-20 05:52:14.303334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:52:14.303342 | orchestrator | Friday 20 February 2026 05:51:51 +0000 (0:00:01.244) 0:55:58.894 *******
2026-02-20 05:52:14.303350 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303374 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303383 | orchestrator |
2026-02-20 05:52:14.303391 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:52:14.303399 | orchestrator | Friday 20 February 2026 05:51:52 +0000 (0:00:01.256) 0:56:00.150 *******
2026-02-20 05:52:14.303407 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303415 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303423 | orchestrator |
2026-02-20 05:52:14.303431 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:52:14.303439 | orchestrator | Friday 20 February 2026 05:51:53 +0000 (0:00:01.224) 0:56:01.374 *******
2026-02-20 05:52:14.303447 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303455 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303464 | orchestrator |
2026-02-20 05:52:14.303472 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:52:14.303480 | orchestrator | Friday 20 February 2026 05:51:55 +0000 (0:00:01.281) 0:56:02.656 *******
2026-02-20 05:52:14.303488 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303496 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303504 | orchestrator |
2026-02-20 05:52:14.303512 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:52:14.303520 | orchestrator | Friday 20 February 2026 05:51:56 +0000 (0:00:01.318) 0:56:03.974 *******
2026-02-20 05:52:14.303528 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303536 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303551 | orchestrator |
2026-02-20 05:52:14.303559 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:52:14.303567 | orchestrator | Friday 20 February 2026 05:51:57 +0000 (0:00:01.297) 0:56:05.272 *******
2026-02-20 05:52:14.303575 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303583 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303591 | orchestrator |
2026-02-20 05:52:14.303599 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:52:14.303607 | orchestrator | Friday 20 February 2026 05:51:59 +0000 (0:00:01.248) 0:56:06.520 *******
2026-02-20 05:52:14.303636 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303644 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303653 | orchestrator |
2026-02-20 05:52:14.303661 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:52:14.303669 | orchestrator | Friday 20 February 2026 05:52:00 +0000 (0:00:01.290) 0:56:07.811 *******
2026-02-20 05:52:14.303677 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303685 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303693 | orchestrator |
2026-02-20 05:52:14.303735 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:52:14.303744 | orchestrator | Friday 20 February 2026 05:52:01 +0000 (0:00:01.254) 0:56:09.065 *******
2026-02-20 05:52:14.303752 | orchestrator | ok: [testbed-node-4]
2026-02-20 05:52:14.303760 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:52:14.303769 | orchestrator |
2026-02-20 05:52:14.303777 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:52:14.303785 | orchestrator | Friday 20 February 2026 05:52:02 +0000 (0:00:01.211) 0:56:10.276 *******
2026-02-20 05:52:14.303793 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303801 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303809 | orchestrator |
2026-02-20 05:52:14.303818 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:52:14.303826 | orchestrator | Friday 20 February 2026 05:52:04 +0000 (0:00:01.523) 0:56:11.799 *******
2026-02-20 05:52:14.303834 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303842 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303850 | orchestrator |
2026-02-20 05:52:14.303858 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:52:14.303866 | orchestrator | Friday 20 February 2026 05:52:05 +0000 (0:00:01.189) 0:56:12.989 *******
2026-02-20 05:52:14.303874 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303883 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303891 | orchestrator |
2026-02-20 05:52:14.303899 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:52:14.303907 | orchestrator | Friday 20 February 2026 05:52:06 +0000 (0:00:01.330) 0:56:14.319 *******
2026-02-20 05:52:14.303915 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303923 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303931 | orchestrator |
2026-02-20 05:52:14.303939 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:52:14.303947 | orchestrator | Friday 20 February 2026 05:52:08 +0000 (0:00:01.301) 0:56:15.621 *******
2026-02-20 05:52:14.303956 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.303964 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.303972 | orchestrator |
2026-02-20 05:52:14.303980 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:52:14.303988 | orchestrator | Friday 20 February 2026 05:52:09 +0000 (0:00:01.284) 0:56:16.835 *******
2026-02-20 05:52:14.303996 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.304004 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.304012 | orchestrator |
2026-02-20 05:52:14.304020 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:52:14.304029 | orchestrator | Friday 20 February 2026 05:52:10 +0000 (0:00:01.284) 0:56:18.119 *******
2026-02-20 05:52:14.304043 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.304051 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.304060 | orchestrator |
2026-02-20 05:52:14.304068 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:52:14.304076 | orchestrator | Friday 20 February 2026 05:52:11 +0000 (0:00:01.218) 0:56:19.338 *******
2026-02-20 05:52:14.304084 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.304092 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.304100 | orchestrator |
2026-02-20 05:52:14.304109 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:52:14.304117 | orchestrator | Friday 20 February 2026 05:52:13 +0000 (0:00:01.233) 0:56:20.572 *******
2026-02-20 05:52:14.304125 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:14.304133 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:14.304141 | orchestrator |
2026-02-20 05:52:14.304155 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:52:59.118295 | orchestrator | Friday 20 February 2026 05:52:14 +0000 (0:00:01.200) 0:56:21.772 *******
2026-02-20 05:52:59.118471 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:59.118499 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:59.118517 | orchestrator |
2026-02-20 05:52:59.118534 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:52:59.118609 | orchestrator | Friday 20 February 2026 05:52:15 +0000 (0:00:01.268) 0:56:23.040 *******
2026-02-20 05:52:59.118628 | orchestrator | skipping: [testbed-node-4]
2026-02-20 05:52:59.118646 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:52:59.118662 | orchestrator |
2026-02-20 05:52:59.118680 | orchestrator | TASK [ceph-common : Include selinux.yml]
*************************************** 2026-02-20 05:52:59.118696 | orchestrator | Friday 20 February 2026 05:52:16 +0000 (0:00:01.404) 0:56:24.445 ******* 2026-02-20 05:52:59.118713 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.118730 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.118747 | orchestrator | 2026-02-20 05:52:59.118764 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 05:52:59.118780 | orchestrator | Friday 20 February 2026 05:52:18 +0000 (0:00:01.238) 0:56:25.684 ******* 2026-02-20 05:52:59.118797 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.118808 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.118818 | orchestrator | 2026-02-20 05:52:59.118829 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 05:52:59.118842 | orchestrator | Friday 20 February 2026 05:52:20 +0000 (0:00:02.224) 0:56:27.908 ******* 2026-02-20 05:52:59.118854 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.118867 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.118880 | orchestrator | 2026-02-20 05:52:59.118892 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 05:52:59.118905 | orchestrator | Friday 20 February 2026 05:52:22 +0000 (0:00:02.358) 0:56:30.267 ******* 2026-02-20 05:52:59.118919 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3 2026-02-20 05:52:59.118932 | orchestrator | 2026-02-20 05:52:59.118944 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 05:52:59.118957 | orchestrator | Friday 20 February 2026 05:52:24 +0000 (0:00:01.341) 0:56:31.609 ******* 2026-02-20 05:52:59.118970 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.118983 | orchestrator | skipping: [testbed-node-3] 
2026-02-20 05:52:59.118996 | orchestrator | 2026-02-20 05:52:59.119026 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 05:52:59.119040 | orchestrator | Friday 20 February 2026 05:52:25 +0000 (0:00:01.346) 0:56:32.956 ******* 2026-02-20 05:52:59.119054 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119066 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119079 | orchestrator | 2026-02-20 05:52:59.119091 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 05:52:59.119130 | orchestrator | Friday 20 February 2026 05:52:26 +0000 (0:00:01.205) 0:56:34.162 ******* 2026-02-20 05:52:59.119142 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:52:59.119153 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 05:52:59.119164 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:52:59.119175 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 05:52:59.119185 | orchestrator | 2026-02-20 05:52:59.119196 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 05:52:59.119207 | orchestrator | Friday 20 February 2026 05:52:28 +0000 (0:00:01.890) 0:56:36.052 ******* 2026-02-20 05:52:59.119218 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.119229 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.119240 | orchestrator | 2026-02-20 05:52:59.119251 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 05:52:59.119262 | orchestrator | Friday 20 February 2026 05:52:30 +0000 (0:00:01.928) 0:56:37.980 ******* 2026-02-20 05:52:59.119273 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119284 | 
orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119295 | orchestrator | 2026-02-20 05:52:59.119305 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 05:52:59.119316 | orchestrator | Friday 20 February 2026 05:52:31 +0000 (0:00:01.247) 0:56:39.227 ******* 2026-02-20 05:52:59.119327 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119338 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119352 | orchestrator | 2026-02-20 05:52:59.119371 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 05:52:59.119390 | orchestrator | Friday 20 February 2026 05:52:32 +0000 (0:00:01.218) 0:56:40.446 ******* 2026-02-20 05:52:59.119408 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119425 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119443 | orchestrator | 2026-02-20 05:52:59.119463 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 05:52:59.119481 | orchestrator | Friday 20 February 2026 05:52:34 +0000 (0:00:01.219) 0:56:41.666 ******* 2026-02-20 05:52:59.119499 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-3 2026-02-20 05:52:59.119518 | orchestrator | 2026-02-20 05:52:59.119537 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 05:52:59.119586 | orchestrator | Friday 20 February 2026 05:52:35 +0000 (0:00:01.235) 0:56:42.902 ******* 2026-02-20 05:52:59.119605 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.119623 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.119642 | orchestrator | 2026-02-20 05:52:59.119660 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 05:52:59.119678 | orchestrator | Friday 20 February 2026 
05:52:37 +0000 (0:00:01.994) 0:56:44.896 ******* 2026-02-20 05:52:59.119696 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:52:59.119738 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:52:59.119760 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:52:59.119779 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119797 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 05:52:59.119815 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 05:52:59.119832 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 05:52:59.119849 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119865 | orchestrator | 2026-02-20 05:52:59.119881 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 05:52:59.119915 | orchestrator | Friday 20 February 2026 05:52:38 +0000 (0:00:01.235) 0:56:46.132 ******* 2026-02-20 05:52:59.119935 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.119953 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.119972 | orchestrator | 2026-02-20 05:52:59.119987 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-20 05:52:59.119998 | orchestrator | Friday 20 February 2026 05:52:39 +0000 (0:00:01.212) 0:56:47.344 ******* 2026-02-20 05:52:59.120009 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120020 | orchestrator | 2026-02-20 05:52:59.120031 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 05:52:59.120048 | orchestrator | Friday 20 February 2026 05:52:41 +0000 (0:00:01.170) 0:56:48.514 ******* 2026-02-20 05:52:59.120065 | orchestrator | 
skipping: [testbed-node-4] 2026-02-20 05:52:59.120084 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120102 | orchestrator | 2026-02-20 05:52:59.120120 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 05:52:59.120138 | orchestrator | Friday 20 February 2026 05:52:42 +0000 (0:00:01.312) 0:56:49.827 ******* 2026-02-20 05:52:59.120150 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120161 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120172 | orchestrator | 2026-02-20 05:52:59.120183 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 05:52:59.120194 | orchestrator | Friday 20 February 2026 05:52:43 +0000 (0:00:01.236) 0:56:51.063 ******* 2026-02-20 05:52:59.120205 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120216 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120226 | orchestrator | 2026-02-20 05:52:59.120246 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 05:52:59.120258 | orchestrator | Friday 20 February 2026 05:52:44 +0000 (0:00:01.262) 0:56:52.326 ******* 2026-02-20 05:52:59.120269 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.120280 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.120291 | orchestrator | 2026-02-20 05:52:59.120302 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 05:52:59.120313 | orchestrator | Friday 20 February 2026 05:52:47 +0000 (0:00:02.656) 0:56:54.982 ******* 2026-02-20 05:52:59.120324 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:52:59.120335 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:52:59.120346 | orchestrator | 2026-02-20 05:52:59.120357 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 05:52:59.120368 | orchestrator 
| Friday 20 February 2026 05:52:48 +0000 (0:00:01.240) 0:56:56.223 ******* 2026-02-20 05:52:59.120379 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3 2026-02-20 05:52:59.120392 | orchestrator | 2026-02-20 05:52:59.120403 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 05:52:59.120414 | orchestrator | Friday 20 February 2026 05:52:49 +0000 (0:00:01.245) 0:56:57.469 ******* 2026-02-20 05:52:59.120425 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120440 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120459 | orchestrator | 2026-02-20 05:52:59.120477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 05:52:59.120495 | orchestrator | Friday 20 February 2026 05:52:51 +0000 (0:00:01.250) 0:56:58.719 ******* 2026-02-20 05:52:59.120512 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120532 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120616 | orchestrator | 2026-02-20 05:52:59.120630 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 05:52:59.120641 | orchestrator | Friday 20 February 2026 05:52:52 +0000 (0:00:01.270) 0:56:59.990 ******* 2026-02-20 05:52:59.120652 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120663 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120674 | orchestrator | 2026-02-20 05:52:59.120685 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 05:52:59.120707 | orchestrator | Friday 20 February 2026 05:52:53 +0000 (0:00:01.280) 0:57:01.270 ******* 2026-02-20 05:52:59.120718 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120729 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120740 | orchestrator | 2026-02-20 
05:52:59.120751 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-20 05:52:59.120762 | orchestrator | Friday 20 February 2026 05:52:55 +0000 (0:00:01.531) 0:57:02.802 ******* 2026-02-20 05:52:59.120773 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120784 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120795 | orchestrator | 2026-02-20 05:52:59.120806 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 05:52:59.120817 | orchestrator | Friday 20 February 2026 05:52:56 +0000 (0:00:01.276) 0:57:04.079 ******* 2026-02-20 05:52:59.120828 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120839 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120864 | orchestrator | 2026-02-20 05:52:59.120888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 05:52:59.120908 | orchestrator | Friday 20 February 2026 05:52:57 +0000 (0:00:01.253) 0:57:05.333 ******* 2026-02-20 05:52:59.120926 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:52:59.120944 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:52:59.120961 | orchestrator | 2026-02-20 05:52:59.120994 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 05:53:41.212692 | orchestrator | Friday 20 February 2026 05:52:59 +0000 (0:00:01.257) 0:57:06.590 ******* 2026-02-20 05:53:41.212832 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.212859 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.212877 | orchestrator | 2026-02-20 05:53:41.212894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 05:53:41.212910 | orchestrator | Friday 20 February 2026 05:53:00 +0000 (0:00:01.280) 0:57:07.871 ******* 2026-02-20 05:53:41.212925 | orchestrator | ok: 
[testbed-node-4] 2026-02-20 05:53:41.212942 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:53:41.212958 | orchestrator | 2026-02-20 05:53:41.212974 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 05:53:41.212990 | orchestrator | Friday 20 February 2026 05:53:01 +0000 (0:00:01.253) 0:57:09.125 ******* 2026-02-20 05:53:41.213091 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3 2026-02-20 05:53:41.213107 | orchestrator | 2026-02-20 05:53:41.213123 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 05:53:41.213139 | orchestrator | Friday 20 February 2026 05:53:03 +0000 (0:00:01.549) 0:57:10.675 ******* 2026-02-20 05:53:41.213153 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-20 05:53:41.213170 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-20 05:53:41.213185 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-20 05:53:41.213200 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-20 05:53:41.213215 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-20 05:53:41.213230 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-20 05:53:41.213245 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-20 05:53:41.213260 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-20 05:53:41.213273 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-20 05:53:41.213288 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-20 05:53:41.213303 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-20 05:53:41.213317 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-20 05:53:41.213331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 
2026-02-20 05:53:41.213365 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-20 05:53:41.213410 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:53:41.213426 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-20 05:53:41.213440 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:53:41.213453 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 05:53:41.213467 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:53:41.213479 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 05:53:41.213518 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:53:41.213530 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 05:53:41.213544 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:53:41.213557 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 05:53:41.213569 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:53:41.213580 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 05:53:41.213594 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:53:41.213607 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 05:53:41.213620 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-20 05:53:41.213634 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-20 05:53:41.213647 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-20 05:53:41.213660 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-20 05:53:41.213674 | orchestrator | 2026-02-20 05:53:41.213688 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 05:53:41.213701 | orchestrator | Friday 20 February 2026 05:53:10 +0000 (0:00:07.067) 0:57:17.742 ******* 2026-02-20 05:53:41.213715 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3 2026-02-20 05:53:41.213728 | orchestrator | 2026-02-20 05:53:41.213742 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-20 05:53:41.213757 | orchestrator | Friday 20 February 2026 05:53:11 +0000 (0:00:01.263) 0:57:19.006 ******* 2026-02-20 05:53:41.213773 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.213789 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.213805 | orchestrator | 2026-02-20 05:53:41.213819 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-20 05:53:41.213834 | orchestrator | Friday 20 February 2026 05:53:13 +0000 (0:00:01.676) 0:57:20.682 ******* 2026-02-20 05:53:41.213850 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.213865 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.213880 | orchestrator | 2026-02-20 05:53:41.213895 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 05:53:41.213934 | orchestrator | Friday 20 February 2026 05:53:15 +0000 (0:00:02.132) 0:57:22.815 ******* 2026-02-20 05:53:41.213951 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.213964 | orchestrator | 
skipping: [testbed-node-3] 2026-02-20 05:53:41.213977 | orchestrator | 2026-02-20 05:53:41.213990 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 05:53:41.214004 | orchestrator | Friday 20 February 2026 05:53:16 +0000 (0:00:01.221) 0:57:24.037 ******* 2026-02-20 05:53:41.214084 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214105 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214119 | orchestrator | 2026-02-20 05:53:41.214134 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 05:53:41.214162 | orchestrator | Friday 20 February 2026 05:53:17 +0000 (0:00:01.241) 0:57:25.279 ******* 2026-02-20 05:53:41.214177 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214192 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214206 | orchestrator | 2026-02-20 05:53:41.214220 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 05:53:41.214234 | orchestrator | Friday 20 February 2026 05:53:19 +0000 (0:00:01.233) 0:57:26.512 ******* 2026-02-20 05:53:41.214248 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214263 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214277 | orchestrator | 2026-02-20 05:53:41.214291 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 05:53:41.214304 | orchestrator | Friday 20 February 2026 05:53:20 +0000 (0:00:01.241) 0:57:27.754 ******* 2026-02-20 05:53:41.214319 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214333 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214346 | orchestrator | 2026-02-20 05:53:41.214360 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 05:53:41.214375 | orchestrator | Friday 20 February 2026 
05:53:21 +0000 (0:00:01.177) 0:57:28.932 ******* 2026-02-20 05:53:41.214389 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214403 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214417 | orchestrator | 2026-02-20 05:53:41.214431 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 05:53:41.214446 | orchestrator | Friday 20 February 2026 05:53:22 +0000 (0:00:01.228) 0:57:30.161 ******* 2026-02-20 05:53:41.214460 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214483 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214520 | orchestrator | 2026-02-20 05:53:41.214534 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-20 05:53:41.214547 | orchestrator | Friday 20 February 2026 05:53:24 +0000 (0:00:01.498) 0:57:31.660 ******* 2026-02-20 05:53:41.214562 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214573 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214585 | orchestrator | 2026-02-20 05:53:41.214599 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 05:53:41.214611 | orchestrator | Friday 20 February 2026 05:53:25 +0000 (0:00:01.248) 0:57:32.908 ******* 2026-02-20 05:53:41.214623 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214635 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214648 | orchestrator | 2026-02-20 05:53:41.214661 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 05:53:41.214675 | orchestrator | Friday 20 February 2026 05:53:26 +0000 (0:00:01.222) 0:57:34.131 ******* 2026-02-20 05:53:41.214688 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214700 | orchestrator | skipping: [testbed-node-3] 2026-02-20 
05:53:41.214714 | orchestrator | 2026-02-20 05:53:41.214727 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 05:53:41.214741 | orchestrator | Friday 20 February 2026 05:53:27 +0000 (0:00:01.234) 0:57:35.365 ******* 2026-02-20 05:53:41.214754 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:53:41.214767 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:53:41.214781 | orchestrator | 2026-02-20 05:53:41.214794 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 05:53:41.214807 | orchestrator | Friday 20 February 2026 05:53:29 +0000 (0:00:01.282) 0:57:36.647 ******* 2026-02-20 05:53:41.214820 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:53:41.214833 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-20 05:53:41.214846 | orchestrator | 2026-02-20 05:53:41.214859 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 05:53:41.214885 | orchestrator | Friday 20 February 2026 05:53:34 +0000 (0:00:05.021) 0:57:41.669 ******* 2026-02-20 05:53:41.214898 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.214912 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 05:53:41.214926 | orchestrator | 2026-02-20 05:53:41.214940 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 05:53:41.214954 | orchestrator | Friday 20 February 2026 05:53:35 +0000 (0:00:01.314) 0:57:42.983 ******* 2026-02-20 05:53:41.214970 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-20 05:53:41.215002 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-20 05:54:29.244995 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-20 05:54:29.245109 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-20 05:54:29.245126 | orchestrator | 2026-02-20 05:54:29.245140 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 05:54:29.245153 | orchestrator | Friday 20 February 2026 05:53:41 +0000 (0:00:05.696) 0:57:48.680 ******* 2026-02-20 05:54:29.245164 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245177 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.245188 | orchestrator | 2026-02-20 05:54:29.245199 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 05:54:29.245210 | orchestrator | Friday 20 February 2026 05:53:42 +0000 
(0:00:01.316) 0:57:49.996 ******* 2026-02-20 05:54:29.245221 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245231 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.245242 | orchestrator | 2026-02-20 05:54:29.245254 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:54:29.245266 | orchestrator | Friday 20 February 2026 05:53:43 +0000 (0:00:01.238) 0:57:51.234 ******* 2026-02-20 05:54:29.245277 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245288 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.245299 | orchestrator | 2026-02-20 05:54:29.245319 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:54:29.245330 | orchestrator | Friday 20 February 2026 05:53:45 +0000 (0:00:01.283) 0:57:52.518 ******* 2026-02-20 05:54:29.245341 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245352 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.245363 | orchestrator | 2026-02-20 05:54:29.245374 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:54:29.245385 | orchestrator | Friday 20 February 2026 05:53:46 +0000 (0:00:01.313) 0:57:53.832 ******* 2026-02-20 05:54:29.245396 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245407 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.245476 | orchestrator | 2026-02-20 05:54:29.245498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:54:29.245518 | orchestrator | Friday 20 February 2026 05:53:47 +0000 (0:00:01.225) 0:57:55.058 ******* 2026-02-20 05:54:29.245538 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.245553 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.245565 | orchestrator | 2026-02-20 
05:54:29.245578 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:54:29.245591 | orchestrator | Friday 20 February 2026 05:53:49 +0000 (0:00:01.737) 0:57:56.795 ******* 2026-02-20 05:54:29.245604 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:54:29.245616 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:54:29.245629 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:54:29.245641 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245654 | orchestrator | 2026-02-20 05:54:29.245667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:54:29.245679 | orchestrator | Friday 20 February 2026 05:53:50 +0000 (0:00:01.404) 0:57:58.199 ******* 2026-02-20 05:54:29.245692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:54:29.245704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:54:29.245717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:54:29.245730 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245742 | orchestrator | 2026-02-20 05:54:29.245755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:54:29.245768 | orchestrator | Friday 20 February 2026 05:53:52 +0000 (0:00:01.421) 0:57:59.621 ******* 2026-02-20 05:54:29.245781 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 05:54:29.245793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 05:54:29.245806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 05:54:29.245818 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.245830 | orchestrator | 2026-02-20 05:54:29.245843 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-20 05:54:29.245856 | orchestrator | Friday 20 February 2026 05:53:53 +0000 (0:00:01.387) 0:58:01.009 ******* 2026-02-20 05:54:29.245873 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.245891 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.245909 | orchestrator | 2026-02-20 05:54:29.245927 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:54:29.245945 | orchestrator | Friday 20 February 2026 05:53:54 +0000 (0:00:01.266) 0:58:02.275 ******* 2026-02-20 05:54:29.245962 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-20 05:54:29.245979 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 05:54:29.245996 | orchestrator | 2026-02-20 05:54:29.246012 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 05:54:29.246105 | orchestrator | Friday 20 February 2026 05:53:56 +0000 (0:00:01.507) 0:58:03.783 ******* 2026-02-20 05:54:29.246126 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.246146 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.246165 | orchestrator | 2026-02-20 05:54:29.246203 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-20 05:54:29.246215 | orchestrator | Friday 20 February 2026 05:53:58 +0000 (0:00:02.006) 0:58:05.789 ******* 2026-02-20 05:54:29.246225 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.246236 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.246247 | orchestrator | 2026-02-20 05:54:29.246258 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-20 05:54:29.246269 | orchestrator | Friday 20 February 2026 05:53:59 +0000 (0:00:01.259) 0:58:07.049 ******* 2026-02-20 05:54:29.246280 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-3 2026-02-20 05:54:29.246304 | orchestrator | 2026-02-20 05:54:29.246315 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-20 05:54:29.246332 | orchestrator | Friday 20 February 2026 05:54:00 +0000 (0:00:01.227) 0:58:08.276 ******* 2026-02-20 05:54:29.246352 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-20 05:54:29.246369 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-20 05:54:29.246387 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-20 05:54:29.246405 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-20 05:54:29.246449 | orchestrator | 2026-02-20 05:54:29.246470 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-20 05:54:29.246489 | orchestrator | Friday 20 February 2026 05:54:02 +0000 (0:00:01.980) 0:58:10.256 ******* 2026-02-20 05:54:29.246509 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:54:29.246528 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 05:54:29.246546 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:54:29.246562 | orchestrator | 2026-02-20 05:54:29.246574 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:54:29.246585 | orchestrator | Friday 20 February 2026 05:54:06 +0000 (0:00:03.287) 0:58:13.544 ******* 2026-02-20 05:54:29.246596 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-20 05:54:29.246615 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 05:54:29.246627 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.246638 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-20 05:54:29.246648 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-02-20 05:54:29.246659 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.246670 | orchestrator | 2026-02-20 05:54:29.246681 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-20 05:54:29.246692 | orchestrator | Friday 20 February 2026 05:54:08 +0000 (0:00:02.101) 0:58:15.646 ******* 2026-02-20 05:54:29.246703 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.246714 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.246725 | orchestrator | 2026-02-20 05:54:29.246736 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-20 05:54:29.246747 | orchestrator | Friday 20 February 2026 05:54:09 +0000 (0:00:01.657) 0:58:17.303 ******* 2026-02-20 05:54:29.246757 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.246768 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:29.246779 | orchestrator | 2026-02-20 05:54:29.246790 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-20 05:54:29.246801 | orchestrator | Friday 20 February 2026 05:54:11 +0000 (0:00:01.299) 0:58:18.603 ******* 2026-02-20 05:54:29.246812 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-3 2026-02-20 05:54:29.246823 | orchestrator | 2026-02-20 05:54:29.246834 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-20 05:54:29.246845 | orchestrator | Friday 20 February 2026 05:54:12 +0000 (0:00:01.262) 0:58:19.866 ******* 2026-02-20 05:54:29.246856 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3 2026-02-20 05:54:29.246867 | orchestrator | 2026-02-20 05:54:29.246878 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-20 05:54:29.246889 | orchestrator | Friday 20 February 2026 
05:54:13 +0000 (0:00:01.272) 0:58:21.139 ******* 2026-02-20 05:54:29.246899 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.246910 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.246921 | orchestrator | 2026-02-20 05:54:29.246932 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-20 05:54:29.246943 | orchestrator | Friday 20 February 2026 05:54:15 +0000 (0:00:02.193) 0:58:23.333 ******* 2026-02-20 05:54:29.246954 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.246973 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.246984 | orchestrator | 2026-02-20 05:54:29.246995 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-20 05:54:29.247006 | orchestrator | Friday 20 February 2026 05:54:18 +0000 (0:00:02.326) 0:58:25.659 ******* 2026-02-20 05:54:29.247017 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.247028 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.247039 | orchestrator | 2026-02-20 05:54:29.247050 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-20 05:54:29.247061 | orchestrator | Friday 20 February 2026 05:54:20 +0000 (0:00:02.356) 0:58:28.015 ******* 2026-02-20 05:54:29.247072 | orchestrator | changed: [testbed-node-3] 2026-02-20 05:54:29.247083 | orchestrator | changed: [testbed-node-4] 2026-02-20 05:54:29.247094 | orchestrator | 2026-02-20 05:54:29.247105 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-20 05:54:29.247116 | orchestrator | Friday 20 February 2026 05:54:24 +0000 (0:00:03.521) 0:58:31.537 ******* 2026-02-20 05:54:29.247127 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:54:29.247138 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:29.247148 | orchestrator | 2026-02-20 05:54:29.247159 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-20 05:54:29.247170 | orchestrator | Friday 20 February 2026 05:54:25 +0000 (0:00:01.744) 0:58:33.282 ******* 2026-02-20 05:54:29.247181 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:54:29.247201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:54:52.424346 | orchestrator | 2026-02-20 05:54:52.424514 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-20 05:54:52.424531 | orchestrator | 2026-02-20 05:54:52.424543 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:54:52.424553 | orchestrator | Friday 20 February 2026 05:54:29 +0000 (0:00:03.430) 0:58:36.712 ******* 2026-02-20 05:54:52.424564 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-20 05:54:52.424574 | orchestrator | 2026-02-20 05:54:52.424584 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:54:52.424594 | orchestrator | Friday 20 February 2026 05:54:30 +0000 (0:00:01.381) 0:58:38.094 ******* 2026-02-20 05:54:52.424604 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424615 | orchestrator | 2026-02-20 05:54:52.424625 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:54:52.424635 | orchestrator | Friday 20 February 2026 05:54:32 +0000 (0:00:01.468) 0:58:39.563 ******* 2026-02-20 05:54:52.424645 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424655 | orchestrator | 2026-02-20 05:54:52.424665 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:54:52.424675 | orchestrator | Friday 20 February 2026 05:54:33 +0000 (0:00:01.110) 0:58:40.673 ******* 2026-02-20 05:54:52.424686 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424697 | 
orchestrator | 2026-02-20 05:54:52.424713 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:54:52.424730 | orchestrator | Friday 20 February 2026 05:54:34 +0000 (0:00:01.467) 0:58:42.140 ******* 2026-02-20 05:54:52.424744 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424760 | orchestrator | 2026-02-20 05:54:52.424776 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:54:52.424786 | orchestrator | Friday 20 February 2026 05:54:35 +0000 (0:00:01.122) 0:58:43.263 ******* 2026-02-20 05:54:52.424796 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424806 | orchestrator | 2026-02-20 05:54:52.424816 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:54:52.424826 | orchestrator | Friday 20 February 2026 05:54:36 +0000 (0:00:01.095) 0:58:44.359 ******* 2026-02-20 05:54:52.424851 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.424862 | orchestrator | 2026-02-20 05:54:52.424871 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:54:52.424903 | orchestrator | Friday 20 February 2026 05:54:38 +0000 (0:00:01.146) 0:58:45.505 ******* 2026-02-20 05:54:52.424916 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:52.424928 | orchestrator | 2026-02-20 05:54:52.424939 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:54:52.424950 | orchestrator | Friday 20 February 2026 05:54:39 +0000 (0:00:01.142) 0:58:46.647 ******* 2026-02-20 05:54:52.424961 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.425034 | orchestrator | 2026-02-20 05:54:52.425047 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:54:52.425058 | orchestrator | Friday 20 February 2026 05:54:40 +0000 
(0:00:01.162) 0:58:47.810 ******* 2026-02-20 05:54:52.425074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:54:52.425090 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:54:52.425113 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:54:52.425134 | orchestrator | 2026-02-20 05:54:52.425150 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-20 05:54:52.425166 | orchestrator | Friday 20 February 2026 05:54:42 +0000 (0:00:01.967) 0:58:49.778 ******* 2026-02-20 05:54:52.425181 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:54:52.425197 | orchestrator | 2026-02-20 05:54:52.425212 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:54:52.425228 | orchestrator | Friday 20 February 2026 05:54:43 +0000 (0:00:01.271) 0:58:51.049 ******* 2026-02-20 05:54:52.425244 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:54:52.425260 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:54:52.425277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:54:52.425294 | orchestrator | 2026-02-20 05:54:52.425304 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:54:52.425314 | orchestrator | Friday 20 February 2026 05:54:46 +0000 (0:00:03.150) 0:58:54.200 ******* 2026-02-20 05:54:52.425324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 05:54:52.425334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 05:54:52.425344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 
05:54:52.425354 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:52.425364 | orchestrator | 2026-02-20 05:54:52.425373 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:54:52.425383 | orchestrator | Friday 20 February 2026 05:54:48 +0000 (0:00:01.731) 0:58:55.932 ******* 2026-02-20 05:54:52.425422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:54:52.425436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:54:52.425466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:54:52.425477 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:52.425488 | orchestrator | 2026-02-20 05:54:52.425498 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:54:52.425508 | orchestrator | Friday 20 February 2026 05:54:50 +0000 (0:00:01.605) 0:58:57.537 ******* 2026-02-20 05:54:52.425519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 
05:54:52.425543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:54:52.425562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:54:52.425573 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:54:52.425597 | orchestrator | 2026-02-20 05:54:52.425607 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:54:52.425627 | orchestrator | Friday 20 February 2026 05:54:51 +0000 (0:00:01.185) 0:58:58.723 ******* 2026-02-20 05:54:52.425639 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:54:44.421575', 'end': '2026-02-20 05:54:44.462401', 'delta': '0:00:00.040826', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:54:52.425652 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:54:44.947539', 'end': '2026-02-20 05:54:44.989486', 'delta': '0:00:00.041947', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:54:52.425662 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:54:45.544441', 'end': '2026-02-20 05:54:45.591934', 'delta': '0:00:00.047493', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:54:52.425672 | orchestrator | 2026-02-20 05:54:52.425689 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:55:09.960587 | orchestrator | Friday 20 February 2026 05:54:52 +0000 (0:00:01.171) 0:58:59.895 ******* 2026-02-20 05:55:09.960737 | orchestrator | ok: [testbed-node-3] 2026-02-20 
05:55:09.960767 | orchestrator | 2026-02-20 05:55:09.960786 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:55:09.960804 | orchestrator | Friday 20 February 2026 05:54:53 +0000 (0:00:01.250) 0:59:01.146 ******* 2026-02-20 05:55:09.960823 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.960843 | orchestrator | 2026-02-20 05:55:09.960863 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-20 05:55:09.960883 | orchestrator | Friday 20 February 2026 05:54:54 +0000 (0:00:01.211) 0:59:02.358 ******* 2026-02-20 05:55:09.960902 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:09.960922 | orchestrator | 2026-02-20 05:55:09.960942 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:55:09.960959 | orchestrator | Friday 20 February 2026 05:54:56 +0000 (0:00:01.186) 0:59:03.545 ******* 2026-02-20 05:55:09.960970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:55:09.960981 | orchestrator | 2026-02-20 05:55:09.960993 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:55:09.961004 | orchestrator | Friday 20 February 2026 05:54:58 +0000 (0:00:01.980) 0:59:05.525 ******* 2026-02-20 05:55:09.961015 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:09.961027 | orchestrator | 2026-02-20 05:55:09.961038 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:55:09.961049 | orchestrator | Friday 20 February 2026 05:54:59 +0000 (0:00:01.137) 0:59:06.663 ******* 2026-02-20 05:55:09.961064 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961083 | orchestrator | 2026-02-20 05:55:09.961101 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:55:09.961122 | orchestrator 
| Friday 20 February 2026 05:55:00 +0000 (0:00:01.142) 0:59:07.806 ******* 2026-02-20 05:55:09.961141 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961161 | orchestrator | 2026-02-20 05:55:09.961205 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:55:09.961232 | orchestrator | Friday 20 February 2026 05:55:01 +0000 (0:00:01.195) 0:59:09.002 ******* 2026-02-20 05:55:09.961250 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961269 | orchestrator | 2026-02-20 05:55:09.961286 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:55:09.961304 | orchestrator | Friday 20 February 2026 05:55:02 +0000 (0:00:01.136) 0:59:10.138 ******* 2026-02-20 05:55:09.961324 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961342 | orchestrator | 2026-02-20 05:55:09.961360 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:55:09.961412 | orchestrator | Friday 20 February 2026 05:55:03 +0000 (0:00:01.120) 0:59:11.259 ******* 2026-02-20 05:55:09.961432 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:09.961444 | orchestrator | 2026-02-20 05:55:09.961455 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:55:09.961466 | orchestrator | Friday 20 February 2026 05:55:04 +0000 (0:00:01.147) 0:59:12.406 ******* 2026-02-20 05:55:09.961477 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961489 | orchestrator | 2026-02-20 05:55:09.961500 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:55:09.961546 | orchestrator | Friday 20 February 2026 05:55:06 +0000 (0:00:01.315) 0:59:13.723 ******* 2026-02-20 05:55:09.961570 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:09.961582 | orchestrator | 2026-02-20 05:55:09.961593 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:55:09.961604 | orchestrator | Friday 20 February 2026 05:55:07 +0000 (0:00:01.152) 0:59:14.875 ******* 2026-02-20 05:55:09.961616 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:09.961627 | orchestrator | 2026-02-20 05:55:09.961638 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:55:09.961675 | orchestrator | Friday 20 February 2026 05:55:08 +0000 (0:00:01.141) 0:59:16.016 ******* 2026-02-20 05:55:09.961687 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:09.961698 | orchestrator | 2026-02-20 05:55:09.961709 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:55:09.961720 | orchestrator | Friday 20 February 2026 05:55:09 +0000 (0:00:01.190) 0:59:17.206 ******* 2026-02-20 05:55:09.961734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:09.961751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}})  2026-02-20 05:55:09.961787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:55:09.961801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}})  2026-02-20 05:55:09.961825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:09.961855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:09.961881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:55:09.961915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:09.961936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:55:09.961971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:11.288697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}})  2026-02-20 05:55:11.288827 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}})  2026-02-20 05:55:11.288850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:11.288893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:55:11.288954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:11.288978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:55:11.288999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:55:11.289012 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:11.289025 | orchestrator | 2026-02-20 05:55:11.289038 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:55:11.289051 | orchestrator | Friday 20 February 2026 05:55:11 +0000 (0:00:01.358) 0:59:18.564 ******* 2026-02-20 05:55:11.289065 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:11.289091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2', 'dm-uuid-LVM-rogr1DxRDGs07Ii1eVUD20MBBAvjpXWjQ80pE2aubUFxb6wQZQZD6n80uN1wncJx'], 'uuids': ['22c82636-cfd1-4dcd-a18c-9fa46a681fb3'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:11.289105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25', 'scsi-SQEMU_QEMU_HARDDISK_072c6774-113a-4ca1-a8e7-4c165b03fe25'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '072c6774', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:11.289129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8hqq24-gM8B-HGCg-NRBz-x5vV-O3o1-WzQB2w', 'scsi-0QEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737', 'scsi-SQEMU_QEMU_HARDDISK_4d1d8767-3f9a-4df7-a383-889dd3aae737'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.456902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457048 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457058 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6', 'dm-uuid-CRYPT-LUKS2-6ffa85ca31b34ffaa66b3499bdbb76c6-jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--59fbb122--dcd4--5ddb--8fde--378adfe4b14f-osd--block--59fbb122--dcd4--5ddb--8fde--378adfe4b14f', 'dm-uuid-LVM-N05Drt313VJO8ej73Med2uhl7Q3NfCOLjUHptgJks7soG8vnX2SubfjS5bfpvFW6'], 'uuids': ['6ffa85ca-31b3-4ffa-a66b-3499bdbb76c6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4d1d8767', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jUHptg-Jks7-soG8-vnX2-Subf-jS5b-fpvFW6']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-22rEKG-WFfn-O0Qj-YL45-EVdC-t8yv-UVN2TG', 'scsi-0QEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2', 'scsi-SQEMU_QEMU_HARDDISK_65f3eac9-8b9d-466c-8b14-5677fbc93ea2'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '65f3eac9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dc3a4123--87de--5eee--bc1c--01eb52a96fe2-osd--block--dc3a4123--87de--5eee--bc1c--01eb52a96fe2']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:12.457154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd0ac2488', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1', 'scsi-SQEMU_QEMU_HARDDISK_d0ac2488-5320-49f6-a574-46dd8e496aa4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:40.798686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:40.798766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:40.798774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx', 'dm-uuid-CRYPT-LUKS2-22c82636cfd14dcda18c9fa46a681fb3-Q80pE2-aubU-Fxb6-wQZQ-ZD6n-80uN-1wncJx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:55:40.798779 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.798784 | orchestrator | 2026-02-20 05:55:40.798789 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 05:55:40.798794 | orchestrator | Friday 20 February 2026 05:55:12 +0000 (0:00:01.369) 0:59:19.934 ******* 2026-02-20 05:55:40.798798 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:40.798803 | orchestrator | 2026-02-20 05:55:40.798807 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 05:55:40.798811 | orchestrator | Friday 20 February 2026 05:55:13 +0000 (0:00:01.476) 0:59:21.411 ******* 2026-02-20 05:55:40.798815 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:40.798819 | orchestrator | 2026-02-20 05:55:40.798823 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:55:40.798827 | orchestrator | Friday 20 February 2026 05:55:15 +0000 (0:00:01.105) 0:59:22.517 ******* 2026-02-20 05:55:40.798830 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:40.798834 | orchestrator | 2026-02-20 05:55:40.798838 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:55:40.798842 | orchestrator | Friday 20 February 2026 05:55:16 +0000 (0:00:01.529) 0:59:24.047 ******* 2026-02-20 05:55:40.798846 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.798849 | orchestrator | 2026-02-20 05:55:40.798853 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 05:55:40.798857 | orchestrator | Friday 20 February 2026 05:55:17 +0000 (0:00:01.146) 0:59:25.194 ******* 2026-02-20 05:55:40.798861 | orchestrator | skipping: [testbed-node-3] 2026-02-20 
05:55:40.798865 | orchestrator | 2026-02-20 05:55:40.798869 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 05:55:40.798872 | orchestrator | Friday 20 February 2026 05:55:19 +0000 (0:00:01.310) 0:59:26.504 ******* 2026-02-20 05:55:40.798893 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.798897 | orchestrator | 2026-02-20 05:55:40.798902 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 05:55:40.798908 | orchestrator | Friday 20 February 2026 05:55:20 +0000 (0:00:01.190) 0:59:27.695 ******* 2026-02-20 05:55:40.798914 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-20 05:55:40.798921 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-20 05:55:40.798927 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-20 05:55:40.798934 | orchestrator | 2026-02-20 05:55:40.798942 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 05:55:40.798951 | orchestrator | Friday 20 February 2026 05:55:22 +0000 (0:00:02.117) 0:59:29.813 ******* 2026-02-20 05:55:40.798957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-20 05:55:40.798964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-20 05:55:40.798969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-20 05:55:40.798987 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.798994 | orchestrator | 2026-02-20 05:55:40.798999 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 05:55:40.799005 | orchestrator | Friday 20 February 2026 05:55:23 +0000 (0:00:01.177) 0:59:30.991 ******* 2026-02-20 05:55:40.799023 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-20 05:55:40.799031 | 
orchestrator | 2026-02-20 05:55:40.799038 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 05:55:40.799045 | orchestrator | Friday 20 February 2026 05:55:24 +0000 (0:00:01.108) 0:59:32.100 ******* 2026-02-20 05:55:40.799051 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799056 | orchestrator | 2026-02-20 05:55:40.799063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 05:55:40.799070 | orchestrator | Friday 20 February 2026 05:55:25 +0000 (0:00:01.120) 0:59:33.220 ******* 2026-02-20 05:55:40.799076 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799083 | orchestrator | 2026-02-20 05:55:40.799087 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 05:55:40.799090 | orchestrator | Friday 20 February 2026 05:55:26 +0000 (0:00:01.125) 0:59:34.345 ******* 2026-02-20 05:55:40.799094 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799098 | orchestrator | 2026-02-20 05:55:40.799102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 05:55:40.799106 | orchestrator | Friday 20 February 2026 05:55:27 +0000 (0:00:01.123) 0:59:35.469 ******* 2026-02-20 05:55:40.799109 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:40.799113 | orchestrator | 2026-02-20 05:55:40.799117 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 05:55:40.799121 | orchestrator | Friday 20 February 2026 05:55:29 +0000 (0:00:01.208) 0:59:36.678 ******* 2026-02-20 05:55:40.799125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:55:40.799129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:55:40.799132 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-20 05:55:40.799136 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799140 | orchestrator | 2026-02-20 05:55:40.799144 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 05:55:40.799148 | orchestrator | Friday 20 February 2026 05:55:30 +0000 (0:00:01.393) 0:59:38.072 ******* 2026-02-20 05:55:40.799151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:55:40.799155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:55:40.799159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 05:55:40.799163 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799167 | orchestrator | 2026-02-20 05:55:40.799176 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 05:55:40.799180 | orchestrator | Friday 20 February 2026 05:55:31 +0000 (0:00:01.367) 0:59:39.440 ******* 2026-02-20 05:55:40.799184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-20 05:55:40.799187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-20 05:55:40.799191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-20 05:55:40.799195 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:55:40.799199 | orchestrator | 2026-02-20 05:55:40.799203 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 05:55:40.799206 | orchestrator | Friday 20 February 2026 05:55:33 +0000 (0:00:01.362) 0:59:40.802 ******* 2026-02-20 05:55:40.799210 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:55:40.799214 | orchestrator | 2026-02-20 05:55:40.799218 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 05:55:40.799222 | orchestrator | Friday 20 February 2026 05:55:34 +0000 
(0:00:01.131) 0:59:41.934 ******* 2026-02-20 05:55:40.799225 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-20 05:55:40.799229 | orchestrator | 2026-02-20 05:55:40.799233 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-20 05:55:40.799237 | orchestrator | Friday 20 February 2026 05:55:35 +0000 (0:00:01.331) 0:59:43.266 ******* 2026-02-20 05:55:40.799240 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:55:40.799244 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:55:40.799248 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:55:40.799252 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 05:55:40.799256 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-20 05:55:40.799261 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-20 05:55:40.799265 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-20 05:55:40.799269 | orchestrator | 2026-02-20 05:55:40.799273 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-20 05:55:40.799278 | orchestrator | Friday 20 February 2026 05:55:37 +0000 (0:00:02.101) 0:59:45.367 ******* 2026-02-20 05:55:40.799282 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:55:40.799286 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:55:40.799291 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:55:40.799295 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-20 05:55:40.799299 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 05:55:40.799307 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 05:55:40.799312 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 05:55:40.799316 | orchestrator |
2026-02-20 05:55:40.799324 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-20 05:56:33.292927 | orchestrator | Friday 20 February 2026 05:55:40 +0000 (0:00:02.900) 0:59:48.268 *******
2026-02-20 05:56:33.293038 | orchestrator | changed: [testbed-node-3]
2026-02-20 05:56:33.293056 | orchestrator |
2026-02-20 05:56:33.293067 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-20 05:56:33.293078 | orchestrator | Friday 20 February 2026 05:55:43 +0000 (0:00:02.298) 0:59:50.566 *******
2026-02-20 05:56:33.293089 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:56:33.293101 | orchestrator |
2026-02-20 05:56:33.293111 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-20 05:56:33.293144 | orchestrator | Friday 20 February 2026 05:55:46 +0000 (0:00:02.923) 0:59:53.490 *******
2026-02-20 05:56:33.293155 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:56:33.293165 | orchestrator |
2026-02-20 05:56:33.293175 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 05:56:33.293185 | orchestrator | Friday 20 February 2026 05:55:48 +0000 (0:00:02.287) 0:59:55.777 *******
2026-02-20 05:56:33.293195 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-20 05:56:33.293205 | orchestrator |
2026-02-20 05:56:33.293215 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 05:56:33.293225 | orchestrator | Friday 20 February 2026 05:55:49 +0000 (0:00:01.116) 0:59:56.893 *******
2026-02-20 05:56:33.293235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-20 05:56:33.293245 | orchestrator |
2026-02-20 05:56:33.293254 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 05:56:33.293264 | orchestrator | Friday 20 February 2026 05:55:50 +0000 (0:00:01.126) 0:59:58.072 *******
2026-02-20 05:56:33.293325 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293335 | orchestrator |
2026-02-20 05:56:33.293345 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 05:56:33.293356 | orchestrator | Friday 20 February 2026 05:55:51 +0000 (0:00:01.126) 0:59:59.199 *******
2026-02-20 05:56:33.293372 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293390 | orchestrator |
2026-02-20 05:56:33.293407 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 05:56:33.293421 | orchestrator | Friday 20 February 2026 05:55:53 +0000 (0:00:01.485) 1:00:00.685 *******
2026-02-20 05:56:33.293437 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293452 | orchestrator |
2026-02-20 05:56:33.293467 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 05:56:33.293483 | orchestrator | Friday 20 February 2026 05:55:54 +0000 (0:00:01.503) 1:00:02.188 *******
2026-02-20 05:56:33.293498 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293516 | orchestrator |
2026-02-20 05:56:33.293533 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 05:56:33.293550 | orchestrator | Friday 20 February 2026 05:55:56 +0000 (0:00:01.563) 1:00:03.751 *******
2026-02-20 05:56:33.293566 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293584 | orchestrator |
2026-02-20 05:56:33.293601 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 05:56:33.293618 | orchestrator | Friday 20 February 2026 05:55:57 +0000 (0:00:01.203) 1:00:04.955 *******
2026-02-20 05:56:33.293628 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293637 | orchestrator |
2026-02-20 05:56:33.293647 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 05:56:33.293657 | orchestrator | Friday 20 February 2026 05:55:58 +0000 (0:00:01.127) 1:00:06.082 *******
2026-02-20 05:56:33.293667 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293677 | orchestrator |
2026-02-20 05:56:33.293687 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 05:56:33.293696 | orchestrator | Friday 20 February 2026 05:55:59 +0000 (0:00:01.111) 1:00:07.194 *******
2026-02-20 05:56:33.293706 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293716 | orchestrator |
2026-02-20 05:56:33.293726 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 05:56:33.293735 | orchestrator | Friday 20 February 2026 05:56:01 +0000 (0:00:01.508) 1:00:08.703 *******
2026-02-20 05:56:33.293745 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293755 | orchestrator |
2026-02-20 05:56:33.293765 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 05:56:33.293774 | orchestrator | Friday 20 February 2026 05:56:02 +0000 (0:00:01.524) 1:00:10.228 *******
2026-02-20 05:56:33.293795 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293805 | orchestrator |
2026-02-20 05:56:33.293815 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 05:56:33.293824 | orchestrator | Friday 20 February 2026 05:56:03 +0000 (0:00:01.110) 1:00:11.339 *******
2026-02-20 05:56:33.293834 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.293844 | orchestrator |
2026-02-20 05:56:33.293853 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 05:56:33.293863 | orchestrator | Friday 20 February 2026 05:56:04 +0000 (0:00:01.112) 1:00:12.451 *******
2026-02-20 05:56:33.293873 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293883 | orchestrator |
2026-02-20 05:56:33.293893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 05:56:33.293903 | orchestrator | Friday 20 February 2026 05:56:06 +0000 (0:00:01.116) 1:00:13.568 *******
2026-02-20 05:56:33.293912 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293922 | orchestrator |
2026-02-20 05:56:33.293946 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 05:56:33.293957 | orchestrator | Friday 20 February 2026 05:56:07 +0000 (0:00:01.152) 1:00:14.721 *******
2026-02-20 05:56:33.293966 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.293976 | orchestrator |
2026-02-20 05:56:33.294007 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 05:56:33.294099 | orchestrator | Friday 20 February 2026 05:56:08 +0000 (0:00:01.138) 1:00:15.859 *******
2026-02-20 05:56:33.294117 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294135 | orchestrator |
2026-02-20 05:56:33.294145 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 05:56:33.294155 | orchestrator | Friday 20 February 2026 05:56:09 +0000 (0:00:01.098) 1:00:16.958 *******
2026-02-20 05:56:33.294165 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294174 | orchestrator |
2026-02-20 05:56:33.294184 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 05:56:33.294194 | orchestrator | Friday 20 February 2026 05:56:10 +0000 (0:00:01.133) 1:00:18.092 *******
2026-02-20 05:56:33.294203 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294213 | orchestrator |
2026-02-20 05:56:33.294223 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 05:56:33.294232 | orchestrator | Friday 20 February 2026 05:56:11 +0000 (0:00:01.204) 1:00:19.296 *******
2026-02-20 05:56:33.294242 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.294252 | orchestrator |
2026-02-20 05:56:33.294261 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 05:56:33.294299 | orchestrator | Friday 20 February 2026 05:56:12 +0000 (0:00:01.185) 1:00:20.481 *******
2026-02-20 05:56:33.294309 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.294319 | orchestrator |
2026-02-20 05:56:33.294329 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 05:56:33.294338 | orchestrator | Friday 20 February 2026 05:56:14 +0000 (0:00:01.169) 1:00:21.651 *******
2026-02-20 05:56:33.294348 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294358 | orchestrator |
2026-02-20 05:56:33.294367 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 05:56:33.294377 | orchestrator | Friday 20 February 2026 05:56:15 +0000 (0:00:01.162) 1:00:22.814 *******
2026-02-20 05:56:33.294386 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294396 | orchestrator |
2026-02-20 05:56:33.294406 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 05:56:33.294416 | orchestrator | Friday 20 February 2026 05:56:16 +0000 (0:00:01.148) 1:00:23.963 *******
2026-02-20 05:56:33.294425 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294435 | orchestrator |
2026-02-20 05:56:33.294445 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 05:56:33.294455 | orchestrator | Friday 20 February 2026 05:56:17 +0000 (0:00:01.096) 1:00:25.059 *******
2026-02-20 05:56:33.294473 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294483 | orchestrator |
2026-02-20 05:56:33.294493 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 05:56:33.294503 | orchestrator | Friday 20 February 2026 05:56:18 +0000 (0:00:01.126) 1:00:26.186 *******
2026-02-20 05:56:33.294512 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294522 | orchestrator |
2026-02-20 05:56:33.294532 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 05:56:33.294541 | orchestrator | Friday 20 February 2026 05:56:19 +0000 (0:00:01.118) 1:00:27.304 *******
2026-02-20 05:56:33.294553 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294570 | orchestrator |
2026-02-20 05:56:33.294587 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 05:56:33.294603 | orchestrator | Friday 20 February 2026 05:56:21 +0000 (0:00:01.203) 1:00:28.508 *******
2026-02-20 05:56:33.294620 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294635 | orchestrator |
2026-02-20 05:56:33.294650 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 05:56:33.294666 | orchestrator | Friday 20 February 2026 05:56:22 +0000 (0:00:01.103) 1:00:29.611 *******
2026-02-20 05:56:33.294684 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294699 | orchestrator |
2026-02-20 05:56:33.294717 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 05:56:33.294733 | orchestrator | Friday 20 February 2026 05:56:23 +0000 (0:00:01.135) 1:00:30.747 *******
2026-02-20 05:56:33.294749 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294764 | orchestrator |
2026-02-20 05:56:33.294774 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 05:56:33.294786 | orchestrator | Friday 20 February 2026 05:56:24 +0000 (0:00:01.147) 1:00:31.894 *******
2026-02-20 05:56:33.294823 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294839 | orchestrator |
2026-02-20 05:56:33.294855 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 05:56:33.294873 | orchestrator | Friday 20 February 2026 05:56:25 +0000 (0:00:01.152) 1:00:33.047 *******
2026-02-20 05:56:33.294889 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294906 | orchestrator |
2026-02-20 05:56:33.294923 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 05:56:33.294939 | orchestrator | Friday 20 February 2026 05:56:26 +0000 (0:00:01.125) 1:00:34.172 *******
2026-02-20 05:56:33.294953 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:56:33.294962 | orchestrator |
2026-02-20 05:56:33.294972 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 05:56:33.294982 | orchestrator | Friday 20 February 2026 05:56:27 +0000 (0:00:01.244) 1:00:35.417 *******
2026-02-20 05:56:33.294991 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.295001 | orchestrator |
2026-02-20 05:56:33.295011 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 05:56:33.295020 | orchestrator | Friday 20 February 2026 05:56:29 +0000 (0:00:02.328) 1:00:37.343 *******
2026-02-20 05:56:33.295030 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:56:33.295040 | orchestrator |
2026-02-20 05:56:33.295050 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 05:56:33.295067 | orchestrator | Friday 20 February 2026 05:56:32 +0000 (0:00:02.328) 1:00:39.671 *******
2026-02-20 05:56:33.295077 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-20 05:56:33.295087 | orchestrator |
2026-02-20 05:56:33.295096 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 05:56:33.295116 | orchestrator | Friday 20 February 2026 05:56:33 +0000 (0:00:01.091) 1:00:40.762 *******
2026-02-20 05:57:20.130394 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130504 | orchestrator |
2026-02-20 05:57:20.130516 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 05:57:20.130543 | orchestrator | Friday 20 February 2026 05:56:34 +0000 (0:00:01.153) 1:00:41.916 *******
2026-02-20 05:57:20.130550 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130556 | orchestrator |
2026-02-20 05:57:20.130563 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 05:57:20.130569 | orchestrator | Friday 20 February 2026 05:56:35 +0000 (0:00:01.123) 1:00:43.040 *******
2026-02-20 05:57:20.130575 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 05:57:20.130582 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 05:57:20.130589 | orchestrator |
2026-02-20 05:57:20.130595 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 05:57:20.130601 | orchestrator | Friday 20 February 2026 05:56:37 +0000 (0:00:01.826) 1:00:44.866 *******
2026-02-20 05:57:20.130608 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:57:20.130627 | orchestrator |
2026-02-20 05:57:20.130640 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 05:57:20.130646 | orchestrator | Friday 20 February 2026 05:56:38 +0000 (0:00:01.450) 1:00:46.317 *******
2026-02-20 05:57:20.130652 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130659 | orchestrator |
2026-02-20 05:57:20.130667 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-20 05:57:20.130678 | orchestrator | Friday 20 February 2026 05:56:39 +0000 (0:00:01.127) 1:00:47.445 *******
2026-02-20 05:57:20.130688 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130698 | orchestrator |
2026-02-20 05:57:20.130708 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 05:57:20.130718 | orchestrator | Friday 20 February 2026 05:56:41 +0000 (0:00:01.197) 1:00:48.642 *******
2026-02-20 05:57:20.130728 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130737 | orchestrator |
2026-02-20 05:57:20.130747 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 05:57:20.130758 | orchestrator | Friday 20 February 2026 05:56:42 +0000 (0:00:01.105) 1:00:49.748 *******
2026-02-20 05:57:20.130770 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-20 05:57:20.130783 | orchestrator |
2026-02-20 05:57:20.130790 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-20 05:57:20.130796 | orchestrator | Friday 20 February 2026 05:56:43 +0000 (0:00:01.306) 1:00:51.055 *******
2026-02-20 05:57:20.130802 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:57:20.130808 | orchestrator |
2026-02-20 05:57:20.130815 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-20 05:57:20.130821 | orchestrator | Friday 20 February 2026 05:56:45 +0000 (0:00:01.867) 1:00:52.922 *******
2026-02-20 05:57:20.130827 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 05:57:20.130834 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 05:57:20.130840 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 05:57:20.130846 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130852 | orchestrator |
2026-02-20 05:57:20.130859 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 05:57:20.130865 | orchestrator | Friday 20 February 2026 05:56:46 +0000 (0:00:01.135) 1:00:54.058 *******
2026-02-20 05:57:20.130871 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130877 | orchestrator |
2026-02-20 05:57:20.130893 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 05:57:20.130899 | orchestrator | Friday 20 February 2026 05:56:47 +0000 (0:00:01.103) 1:00:55.162 *******
2026-02-20 05:57:20.130905 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130911 | orchestrator |
2026-02-20 05:57:20.130925 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 05:57:20.130942 | orchestrator | Friday 20 February 2026 05:56:48 +0000 (0:00:01.172) 1:00:56.334 *******
2026-02-20 05:57:20.130949 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130957 | orchestrator |
2026-02-20 05:57:20.130964 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 05:57:20.130971 | orchestrator | Friday 20 February 2026 05:56:49 +0000 (0:00:01.132) 1:00:57.467 *******
2026-02-20 05:57:20.130978 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.130985 | orchestrator |
2026-02-20 05:57:20.130992 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 05:57:20.130999 | orchestrator | Friday 20 February 2026 05:56:51 +0000 (0:00:01.171) 1:00:58.638 *******
2026-02-20 05:57:20.131006 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131013 | orchestrator |
2026-02-20 05:57:20.131020 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 05:57:20.131027 | orchestrator | Friday 20 February 2026 05:56:52 +0000 (0:00:01.121) 1:00:59.760 *******
2026-02-20 05:57:20.131034 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:57:20.131041 | orchestrator |
2026-02-20 05:57:20.131048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 05:57:20.131054 | orchestrator | Friday 20 February 2026 05:56:54 +0000 (0:00:02.522) 1:01:02.282 *******
2026-02-20 05:57:20.131063 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:57:20.131074 | orchestrator |
2026-02-20 05:57:20.131086 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 05:57:20.131113 | orchestrator | Friday 20 February 2026 05:56:55 +0000 (0:00:01.141) 1:01:03.424 *******
2026-02-20 05:57:20.131121 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-20 05:57:20.131128 | orchestrator |
2026-02-20 05:57:20.131136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 05:57:20.131158 | orchestrator | Friday 20 February 2026 05:56:57 +0000 (0:00:01.129) 1:01:04.553 *******
2026-02-20 05:57:20.131165 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131172 | orchestrator |
2026-02-20 05:57:20.131180 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 05:57:20.131187 | orchestrator | Friday 20 February 2026 05:56:58 +0000 (0:00:01.268) 1:01:05.821 *******
2026-02-20 05:57:20.131195 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131202 | orchestrator |
2026-02-20 05:57:20.131210 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 05:57:20.131237 | orchestrator | Friday 20 February 2026 05:56:59 +0000 (0:00:01.140) 1:01:06.961 *******
2026-02-20 05:57:20.131248 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131258 | orchestrator |
2026-02-20 05:57:20.131269 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 05:57:20.131279 | orchestrator | Friday 20 February 2026 05:57:00 +0000 (0:00:01.143) 1:01:08.105 *******
2026-02-20 05:57:20.131289 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131300 | orchestrator |
2026-02-20 05:57:20.131309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 05:57:20.131315 | orchestrator | Friday 20 February 2026 05:57:01 +0000 (0:00:01.112) 1:01:09.218 *******
2026-02-20 05:57:20.131321 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131327 | orchestrator |
2026-02-20 05:57:20.131334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 05:57:20.131340 | orchestrator | Friday 20 February 2026 05:57:02 +0000 (0:00:01.154) 1:01:10.373 *******
2026-02-20 05:57:20.131346 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131352 | orchestrator |
2026-02-20 05:57:20.131358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 05:57:20.131364 | orchestrator | Friday 20 February 2026 05:57:04 +0000 (0:00:01.129) 1:01:11.502 *******
2026-02-20 05:57:20.131370 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131376 | orchestrator |
2026-02-20 05:57:20.131392 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 05:57:20.131404 | orchestrator | Friday 20 February 2026 05:57:05 +0000 (0:00:01.123) 1:01:12.626 *******
2026-02-20 05:57:20.131418 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:57:20.131425 | orchestrator |
2026-02-20 05:57:20.131431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 05:57:20.131437 | orchestrator | Friday 20 February 2026 05:57:06 +0000 (0:00:01.143) 1:01:13.770 *******
2026-02-20 05:57:20.131443 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:57:20.131449 | orchestrator |
2026-02-20 05:57:20.131455 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 05:57:20.131462 | orchestrator | Friday 20 February 2026 05:57:07 +0000 (0:00:01.115) 1:01:14.904 *******
2026-02-20 05:57:20.131468 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-20 05:57:20.131474 | orchestrator |
2026-02-20 05:57:20.131480 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 05:57:20.131486 | orchestrator | Friday 20 February 2026 05:57:08 +0000 (0:00:01.115) 1:01:16.020 *******
2026-02-20 05:57:20.131493 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-20 05:57:20.131499 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-20 05:57:20.131505 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-20 05:57:20.131512 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-20 05:57:20.131518 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-20 05:57:20.131524 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-20 05:57:20.131530 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-20 05:57:20.131536 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-20 05:57:20.131543 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 05:57:20.131549 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 05:57:20.131555 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 05:57:20.131561 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 05:57:20.131567 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 05:57:20.131573 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 05:57:20.131580 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-20 05:57:20.131586 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-20 05:57:20.131592 | orchestrator |
2026-02-20 05:57:20.131598 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 05:57:20.131604 | orchestrator | Friday 20 February 2026 05:57:15 +0000 (0:00:06.769) 1:01:22.789 *******
2026-02-20 05:57:20.131610 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-20 05:57:20.131616 | orchestrator |
2026-02-20 05:57:20.131623 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-20 05:57:20.131629 | orchestrator | Friday 20 February 2026 05:57:16 +0000 (0:00:01.238) 1:01:24.028 *******
2026-02-20 05:57:20.131635 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:57:20.131642 | orchestrator |
2026-02-20 05:57:20.131648 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-20 05:57:20.131659 | orchestrator | Friday 20 February 2026 05:57:18 +0000 (0:00:01.542) 1:01:25.570 *******
2026-02-20 05:57:20.131665 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:57:20.131671 | orchestrator |
2026-02-20 05:57:20.131678 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 05:57:20.131689 | orchestrator | Friday 20 February 2026 05:57:20 +0000 (0:00:02.031) 1:01:27.601 *******
2026-02-20 05:58:11.078905 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079029 | orchestrator |
2026-02-20 05:58:11.079055 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 05:58:11.079076 | orchestrator | Friday 20 February 2026 05:57:21 +0000 (0:00:01.119) 1:01:28.721 *******
2026-02-20 05:58:11.079094 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079114 | orchestrator |
2026-02-20 05:58:11.079134 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 05:58:11.079154 | orchestrator | Friday 20 February 2026 05:57:22 +0000 (0:00:01.172) 1:01:29.893 *******
2026-02-20 05:58:11.079172 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079248 | orchestrator |
2026-02-20 05:58:11.079260 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 05:58:11.079271 | orchestrator | Friday 20 February 2026 05:57:23 +0000 (0:00:01.116) 1:01:31.009 *******
2026-02-20 05:58:11.079282 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079293 | orchestrator |
2026-02-20 05:58:11.079304 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 05:58:11.079316 | orchestrator | Friday 20 February 2026 05:57:24 +0000 (0:00:01.182) 1:01:32.191 *******
2026-02-20 05:58:11.079327 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079338 | orchestrator |
2026-02-20 05:58:11.079349 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 05:58:11.079362 | orchestrator | Friday 20 February 2026 05:57:25 +0000 (0:00:01.214) 1:01:33.406 *******
2026-02-20 05:58:11.079373 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079384 | orchestrator |
2026-02-20 05:58:11.079395 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 05:58:11.079406 | orchestrator | Friday 20 February 2026 05:57:27 +0000 (0:00:01.135) 1:01:34.542 *******
2026-02-20 05:58:11.079417 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079428 | orchestrator |
2026-02-20 05:58:11.079441 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 05:58:11.079455 | orchestrator | Friday 20 February 2026 05:57:28 +0000 (0:00:01.159) 1:01:35.702 *******
2026-02-20 05:58:11.079468 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079480 | orchestrator |
2026-02-20 05:58:11.079493 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 05:58:11.079506 | orchestrator | Friday 20 February 2026 05:57:29 +0000 (0:00:01.118) 1:01:36.820 *******
2026-02-20 05:58:11.079519 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079532 | orchestrator |
2026-02-20 05:58:11.079545 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 05:58:11.079557 | orchestrator | Friday 20 February 2026 05:57:30 +0000 (0:00:01.157) 1:01:37.977 *******
2026-02-20 05:58:11.079575 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079602 | orchestrator |
2026-02-20 05:58:11.079626 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 05:58:11.079645 | orchestrator | Friday 20 February 2026 05:57:31 +0000 (0:00:01.145) 1:01:39.123 *******
2026-02-20 05:58:11.079664 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079683 | orchestrator |
2026-02-20 05:58:11.079702 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 05:58:11.079722 | orchestrator | Friday 20 February 2026 05:57:32 +0000 (0:00:01.129) 1:01:40.252 *******
2026-02-20 05:58:11.079742 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-20 05:58:11.079762 | orchestrator |
2026-02-20 05:58:11.079781 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 05:58:11.079802 | orchestrator | Friday 20 February 2026 05:57:37 +0000 (0:00:04.821) 1:01:45.074 *******
2026-02-20 05:58:11.079820 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-20 05:58:11.079859 | orchestrator |
2026-02-20 05:58:11.079870 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-20 05:58:11.079881 | orchestrator | Friday 20 February 2026 05:57:38 +0000 (0:00:01.191) 1:01:46.266 *******
2026-02-20 05:58:11.079894 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-20 05:58:11.079909 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-20 05:58:11.079921 | orchestrator |
2026-02-20 05:58:11.079932 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-20 05:58:11.079943 | orchestrator | Friday 20 February 2026 05:57:44 +0000 (0:00:05.261) 1:01:51.527 *******
2026-02-20 05:58:11.079953 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.079965 | orchestrator |
2026-02-20 05:58:11.079990 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-20 05:58:11.080001 | orchestrator | Friday 20 February 2026 05:57:45 +0000 (0:00:01.103) 1:01:52.631 *******
2026-02-20 05:58:11.080012 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080023 | orchestrator |
2026-02-20 05:58:11.080034 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 05:58:11.080066 | orchestrator | Friday 20 February 2026 05:57:46 +0000 (0:00:01.123) 1:01:53.754 *******
2026-02-20 05:58:11.080077 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080089 | orchestrator |
2026-02-20 05:58:11.080100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 05:58:11.080110 | orchestrator | Friday 20 February 2026 05:57:47 +0000 (0:00:01.145) 1:01:54.899 *******
2026-02-20 05:58:11.080121 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080132 | orchestrator |
2026-02-20 05:58:11.080143 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 05:58:11.080154 | orchestrator | Friday 20 February 2026 05:57:48 +0000 (0:00:01.182) 1:01:56.082 *******
2026-02-20 05:58:11.080165 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080176 | orchestrator |
2026-02-20 05:58:11.080253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 05:58:11.080265 | orchestrator | Friday 20 February 2026 05:57:49 +0000 (0:00:01.121) 1:01:57.204 *******
2026-02-20 05:58:11.080276 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:58:11.080287 | orchestrator |
2026-02-20 05:58:11.080298 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 05:58:11.080309 | orchestrator | Friday 20 February 2026 05:57:50 +0000 (0:00:01.225) 1:01:58.429 *******
2026-02-20 05:58:11.080320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:58:11.080331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:58:11.080342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:58:11.080353 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080363 | orchestrator |
2026-02-20 05:58:11.080374 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 05:58:11.080385 | orchestrator | Friday 20 February 2026 05:57:52 +0000 (0:00:01.385) 1:01:59.815 *******
2026-02-20 05:58:11.080396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:58:11.080407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:58:11.080417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:58:11.080439 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080450 | orchestrator |
2026-02-20 05:58:11.080461 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 05:58:11.080472 | orchestrator | Friday 20 February 2026 05:57:54 +0000 (0:00:01.780) 1:02:01.596 *******
2026-02-20 05:58:11.080482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-20 05:58:11.080493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-20 05:58:11.080504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-20 05:58:11.080515 | orchestrator | skipping: [testbed-node-3]
2026-02-20 05:58:11.080525 | orchestrator |
2026-02-20 05:58:11.080536 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 05:58:11.080547 | orchestrator | Friday 20 February 2026 05:57:55 +0000 (0:00:01.753) 1:02:03.350 *******
2026-02-20 05:58:11.080558 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:58:11.080569 | orchestrator |
2026-02-20 05:58:11.080580 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 05:58:11.080591 | orchestrator | Friday 20 February 2026 05:57:57 +0000 (0:00:01.164) 1:02:04.515 *******
2026-02-20 05:58:11.080602 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-20 05:58:11.080613 | orchestrator |
2026-02-20 05:58:11.080623 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-20 05:58:11.080634 | orchestrator | Friday 20 February 2026 05:57:58 +0000 (0:00:01.338) 1:02:05.854 *******
2026-02-20 05:58:11.080645 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:58:11.080656 | orchestrator |
2026-02-20 05:58:11.080667 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-20 05:58:11.080677 | orchestrator | Friday 20 February 2026 05:58:00 +0000 (0:00:01.759) 1:02:07.614 *******
2026-02-20 05:58:11.080688 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-02-20 05:58:11.080699 | orchestrator |
2026-02-20 05:58:11.080710 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-20 05:58:11.080721 | orchestrator | Friday 20 February 2026 05:58:01 +0000 (0:00:01.454) 1:02:09.068 *******
2026-02-20 05:58:11.080731 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 05:58:11.080742 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-20 05:58:11.080753 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-20 05:58:11.080764 | orchestrator |
2026-02-20 05:58:11.080774 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-20 05:58:11.080785 | orchestrator | Friday 20 February 2026 05:58:04 +0000 (0:00:03.261) 1:02:12.330 *******
2026-02-20 05:58:11.080796 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-20 05:58:11.080807 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-20 05:58:11.080818 | orchestrator | ok: [testbed-node-3]
2026-02-20 05:58:11.080829 | orchestrator |
2026-02-20 05:58:11.080840 | orchestrator | TASK [ceph-rgw : Copy
SSL certificate & key data to certificate path] ********** 2026-02-20 05:58:11.080851 | orchestrator | Friday 20 February 2026 05:58:06 +0000 (0:00:01.968) 1:02:14.299 ******* 2026-02-20 05:58:11.080862 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:58:11.080872 | orchestrator | 2026-02-20 05:58:11.080887 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-20 05:58:11.080906 | orchestrator | Friday 20 February 2026 05:58:07 +0000 (0:00:01.128) 1:02:15.427 ******* 2026-02-20 05:58:11.080933 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-20 05:58:11.080954 | orchestrator | 2026-02-20 05:58:11.080971 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-20 05:58:11.080991 | orchestrator | Friday 20 February 2026 05:58:09 +0000 (0:00:01.453) 1:02:16.880 ******* 2026-02-20 05:58:11.081022 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 05:59:25.828278 | orchestrator | 2026-02-20 05:59:25.828367 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-20 05:59:25.828375 | orchestrator | Friday 20 February 2026 05:58:11 +0000 (0:00:01.672) 1:02:18.552 ******* 2026-02-20 05:59:25.828379 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:59:25.828385 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 05:59:25.828390 | orchestrator | 2026-02-20 05:59:25.828395 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 05:59:25.828399 | orchestrator | Friday 20 February 2026 05:58:16 +0000 (0:00:05.600) 1:02:24.153 ******* 
2026-02-20 05:59:25.828403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 05:59:25.828407 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 05:59:25.828412 | orchestrator | 2026-02-20 05:59:25.828416 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 05:59:25.828420 | orchestrator | Friday 20 February 2026 05:58:19 +0000 (0:00:03.194) 1:02:27.350 ******* 2026-02-20 05:59:25.828424 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-20 05:59:25.828428 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:59:25.828433 | orchestrator | 2026-02-20 05:59:25.828436 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-20 05:59:25.828440 | orchestrator | Friday 20 February 2026 05:58:21 +0000 (0:00:01.978) 1:02:29.329 ******* 2026-02-20 05:59:25.828444 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-20 05:59:25.828448 | orchestrator | 2026-02-20 05:59:25.828452 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-20 05:59:25.828456 | orchestrator | Friday 20 February 2026 05:58:23 +0000 (0:00:01.502) 1:02:30.832 ******* 2026-02-20 05:59:25.828460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828481 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:59:25.828485 | orchestrator | 2026-02-20 05:59:25.828489 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-20 05:59:25.828492 | orchestrator | Friday 20 February 2026 05:58:24 +0000 (0:00:01.592) 1:02:32.424 ******* 2026-02-20 05:59:25.828496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 05:59:25.828516 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:59:25.828535 | orchestrator | 2026-02-20 05:59:25.828542 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-20 05:59:25.828548 | orchestrator | Friday 20 February 2026 05:58:26 +0000 (0:00:01.616) 1:02:34.041 ******* 2026-02-20 05:59:25.828554 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 05:59:25.828561 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 05:59:25.828567 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 05:59:25.828586 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 05:59:25.828595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 05:59:25.828601 | orchestrator | 2026-02-20 05:59:25.828608 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-20 05:59:25.828626 | orchestrator | Friday 20 February 2026 05:58:59 +0000 (0:00:32.513) 1:03:06.554 ******* 2026-02-20 05:59:25.828631 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:59:25.828635 | orchestrator | 2026-02-20 05:59:25.828639 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-20 05:59:25.828643 | orchestrator | Friday 20 February 2026 05:59:00 +0000 (0:00:01.124) 1:03:07.679 ******* 2026-02-20 05:59:25.828646 | orchestrator | skipping: [testbed-node-3] 2026-02-20 05:59:25.828650 | orchestrator | 2026-02-20 05:59:25.828654 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-20 05:59:25.828660 | orchestrator | Friday 20 February 2026 05:59:01 +0000 (0:00:01.097) 1:03:08.777 ******* 2026-02-20 05:59:25.828666 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-20 05:59:25.828672 | orchestrator | 2026-02-20 05:59:25.828677 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-20 05:59:25.828683 | orchestrator | Friday 20 February 2026 05:59:02 +0000 (0:00:01.501) 1:03:10.279 ******* 2026-02-20 05:59:25.828688 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-20 05:59:25.828694 | orchestrator | 2026-02-20 05:59:25.828700 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-20 05:59:25.828705 | orchestrator | Friday 20 February 2026 05:59:04 +0000 (0:00:01.585) 1:03:11.864 ******* 2026-02-20 05:59:25.828712 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:59:25.828718 | orchestrator | 2026-02-20 05:59:25.828724 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-20 05:59:25.828729 | orchestrator | Friday 20 February 2026 05:59:06 +0000 (0:00:02.042) 1:03:13.907 ******* 2026-02-20 05:59:25.828735 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:59:25.828741 | orchestrator | 2026-02-20 05:59:25.828747 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-20 05:59:25.828752 | orchestrator | Friday 20 February 2026 05:59:08 +0000 (0:00:01.990) 1:03:15.897 ******* 2026-02-20 05:59:25.828758 | orchestrator | ok: [testbed-node-3] 2026-02-20 05:59:25.828763 | orchestrator | 2026-02-20 05:59:25.828769 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-20 05:59:25.828775 | orchestrator | Friday 20 February 2026 05:59:10 +0000 (0:00:02.282) 1:03:18.180 ******* 2026-02-20 05:59:25.828781 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-20 05:59:25.828787 | orchestrator | 2026-02-20 05:59:25.828793 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-20 05:59:25.828800 | 
orchestrator | 2026-02-20 05:59:25.828813 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 05:59:25.828819 | orchestrator | Friday 20 February 2026 05:59:13 +0000 (0:00:02.783) 1:03:20.964 ******* 2026-02-20 05:59:25.828826 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-20 05:59:25.828832 | orchestrator | 2026-02-20 05:59:25.828838 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 05:59:25.828845 | orchestrator | Friday 20 February 2026 05:59:14 +0000 (0:00:01.106) 1:03:22.070 ******* 2026-02-20 05:59:25.828851 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828855 | orchestrator | 2026-02-20 05:59:25.828859 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 05:59:25.828864 | orchestrator | Friday 20 February 2026 05:59:16 +0000 (0:00:01.447) 1:03:23.518 ******* 2026-02-20 05:59:25.828868 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828873 | orchestrator | 2026-02-20 05:59:25.828877 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 05:59:25.828881 | orchestrator | Friday 20 February 2026 05:59:17 +0000 (0:00:01.114) 1:03:24.633 ******* 2026-02-20 05:59:25.828886 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828890 | orchestrator | 2026-02-20 05:59:25.828894 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 05:59:25.828899 | orchestrator | Friday 20 February 2026 05:59:18 +0000 (0:00:01.420) 1:03:26.053 ******* 2026-02-20 05:59:25.828903 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828908 | orchestrator | 2026-02-20 05:59:25.828912 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 05:59:25.828916 | orchestrator | Friday 20 
February 2026 05:59:19 +0000 (0:00:01.101) 1:03:27.155 ******* 2026-02-20 05:59:25.828921 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828925 | orchestrator | 2026-02-20 05:59:25.828930 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 05:59:25.828934 | orchestrator | Friday 20 February 2026 05:59:20 +0000 (0:00:01.144) 1:03:28.300 ******* 2026-02-20 05:59:25.828938 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828942 | orchestrator | 2026-02-20 05:59:25.828947 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 05:59:25.828951 | orchestrator | Friday 20 February 2026 05:59:21 +0000 (0:00:01.122) 1:03:29.422 ******* 2026-02-20 05:59:25.828956 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:25.828960 | orchestrator | 2026-02-20 05:59:25.828964 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 05:59:25.828969 | orchestrator | Friday 20 February 2026 05:59:23 +0000 (0:00:01.132) 1:03:30.555 ******* 2026-02-20 05:59:25.828973 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:25.828978 | orchestrator | 2026-02-20 05:59:25.828982 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 05:59:25.828986 | orchestrator | Friday 20 February 2026 05:59:24 +0000 (0:00:01.104) 1:03:31.659 ******* 2026-02-20 05:59:25.828995 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:59:25.829001 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:59:25.829007 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:59:25.829013 | orchestrator | 2026-02-20 05:59:25.829019 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-20 05:59:25.829030 | orchestrator | Friday 20 February 2026 05:59:25 +0000 (0:00:01.642) 1:03:33.302 ******* 2026-02-20 05:59:50.481785 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.481891 | orchestrator | 2026-02-20 05:59:50.481905 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 05:59:50.481916 | orchestrator | Friday 20 February 2026 05:59:27 +0000 (0:00:01.238) 1:03:34.540 ******* 2026-02-20 05:59:50.481926 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 05:59:50.481958 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 05:59:50.481967 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 05:59:50.481976 | orchestrator | 2026-02-20 05:59:50.481985 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 05:59:50.481994 | orchestrator | Friday 20 February 2026 05:59:29 +0000 (0:00:02.885) 1:03:37.426 ******* 2026-02-20 05:59:50.482003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-20 05:59:50.482012 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-20 05:59:50.482073 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-20 05:59:50.482083 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482092 | orchestrator | 2026-02-20 05:59:50.482101 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 05:59:50.482141 | orchestrator | Friday 20 February 2026 05:59:31 +0000 (0:00:01.436) 1:03:38.863 ******* 2026-02-20 05:59:50.482156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482168 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482187 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482196 | orchestrator | 2026-02-20 05:59:50.482204 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 05:59:50.482213 | orchestrator | Friday 20 February 2026 05:59:33 +0000 (0:00:01.963) 1:03:40.827 ******* 2026-02-20 05:59:50.482224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482245 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:50.482254 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482263 | orchestrator | 2026-02-20 05:59:50.482272 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 05:59:50.482280 | orchestrator | Friday 20 February 2026 05:59:34 +0000 (0:00:01.113) 1:03:41.941 ******* 2026-02-20 05:59:50.482321 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 05:59:27.592068', 'end': '2026-02-20 05:59:27.632090', 'delta': '0:00:00.040022', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 05:59:50.482343 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 05:59:28.171854', 'end': '2026-02-20 05:59:28.221761', 'delta': '0:00:00.049907', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 05:59:50.482354 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 05:59:28.758282', 'end': '2026-02-20 05:59:28.818323', 'delta': '0:00:00.060041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 05:59:50.482364 | orchestrator | 2026-02-20 05:59:50.482374 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 05:59:50.482385 | orchestrator | Friday 20 February 2026 05:59:35 +0000 (0:00:01.165) 1:03:43.106 ******* 2026-02-20 05:59:50.482395 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.482405 | orchestrator | 2026-02-20 05:59:50.482415 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 05:59:50.482426 | orchestrator | Friday 20 February 2026 05:59:36 +0000 (0:00:01.205) 1:03:44.312 ******* 2026-02-20 05:59:50.482436 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482446 | orchestrator | 2026-02-20 05:59:50.482456 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-20 05:59:50.482466 | orchestrator | Friday 20 February 2026 05:59:38 +0000 (0:00:01.223) 1:03:45.535 ******* 2026-02-20 05:59:50.482476 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.482486 | orchestrator | 2026-02-20 05:59:50.482495 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 05:59:50.482506 | orchestrator | Friday 20 February 2026 05:59:39 +0000 (0:00:01.135) 1:03:46.671 ******* 2026-02-20 05:59:50.482516 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-20 05:59:50.482526 | orchestrator | 2026-02-20 05:59:50.482537 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:59:50.482547 | orchestrator | Friday 20 February 2026 05:59:41 +0000 (0:00:02.049) 1:03:48.720 ******* 2026-02-20 05:59:50.482557 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.482567 | orchestrator | 2026-02-20 05:59:50.482577 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 05:59:50.482587 | orchestrator | Friday 20 February 2026 05:59:42 +0000 (0:00:01.129) 1:03:49.850 ******* 2026-02-20 05:59:50.482597 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482613 | orchestrator | 2026-02-20 05:59:50.482624 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 05:59:50.482634 | orchestrator | Friday 20 February 2026 05:59:43 +0000 (0:00:01.130) 1:03:50.980 ******* 2026-02-20 05:59:50.482644 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482654 | orchestrator | 2026-02-20 05:59:50.482664 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 05:59:50.482674 | orchestrator | Friday 20 February 2026 05:59:44 +0000 (0:00:01.206) 1:03:52.186 ******* 2026-02-20 05:59:50.482684 | orchestrator | 
skipping: [testbed-node-4] 2026-02-20 05:59:50.482694 | orchestrator | 2026-02-20 05:59:50.482704 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 05:59:50.482713 | orchestrator | Friday 20 February 2026 05:59:45 +0000 (0:00:01.179) 1:03:53.366 ******* 2026-02-20 05:59:50.482721 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482730 | orchestrator | 2026-02-20 05:59:50.482738 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 05:59:50.482747 | orchestrator | Friday 20 February 2026 05:59:46 +0000 (0:00:01.112) 1:03:54.478 ******* 2026-02-20 05:59:50.482756 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.482764 | orchestrator | 2026-02-20 05:59:50.482777 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 05:59:50.482786 | orchestrator | Friday 20 February 2026 05:59:48 +0000 (0:00:01.171) 1:03:55.650 ******* 2026-02-20 05:59:50.482795 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:50.482803 | orchestrator | 2026-02-20 05:59:50.482812 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 05:59:50.482821 | orchestrator | Friday 20 February 2026 05:59:49 +0000 (0:00:01.099) 1:03:56.750 ******* 2026-02-20 05:59:50.482829 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:50.482838 | orchestrator | 2026-02-20 05:59:50.482847 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 05:59:50.482861 | orchestrator | Friday 20 February 2026 05:59:50 +0000 (0:00:01.202) 1:03:57.952 ******* 2026-02-20 05:59:52.991582 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:52.991667 | orchestrator | 2026-02-20 05:59:52.991679 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 05:59:52.991689 
| orchestrator | Friday 20 February 2026 05:59:51 +0000 (0:00:01.130) 1:03:59.082 ******* 2026-02-20 05:59:52.991698 | orchestrator | ok: [testbed-node-4] 2026-02-20 05:59:52.991705 | orchestrator | 2026-02-20 05:59:52.991712 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 05:59:52.991719 | orchestrator | Friday 20 February 2026 05:59:52 +0000 (0:00:01.158) 1:04:00.241 ******* 2026-02-20 05:59:52.991727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}})  2026-02-20 05:59:52.991748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 05:59:52.991778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}})  2026-02-20 05:59:52.991787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 05:59:52.991839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}})  2026-02-20 05:59:52.991876 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}})  2026-02-20 05:59:52.991887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:52.991902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 05:59:54.299516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:54.299606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 05:59:54.299620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 05:59:54.299633 | orchestrator | skipping: [testbed-node-4] 2026-02-20 05:59:54.299644 | orchestrator | 2026-02-20 05:59:54.299667 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 05:59:54.299685 | orchestrator | Friday 20 February 2026 05:59:54 +0000 (0:00:01.333) 1:04:01.575 ******* 2026-02-20 05:59:54.299720 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd', 'dm-uuid-LVM-uad2V0OpLGdzdU3eEHWgkaxlNp1nXa7g5o0axnC3QdSAlB8AkjlVWSH5X44uoXlU'], 'uuids': ['931641c7-2345-4218-a67b-b8fcf36da2a6'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca', 'scsi-SQEMU_QEMU_HARDDISK_f09aecfd-253e-43ea-a63d-1297b744a3ca'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f09aecfd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299815 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UBLLvV-8QQ0-xzoQ-2mnT-QJVN-4rdp-m3Ln3u', 'scsi-0QEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6', 'scsi-SQEMU_QEMU_HARDDISK_6dc65ba2-ebcf-4c2d-a294-11a042e511e6'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef']}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299853 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 05:59:54.299884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T', 'dm-uuid-CRYPT-LUKS2-7f9663ba9e0d48338edb558cf7968427-GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.124837 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.124939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ad1d47ce--3300--5f5f--a456--60212d7294ef-osd--block--ad1d47ce--3300--5f5f--a456--60212d7294ef', 'dm-uuid-LVM-eZPdyLEJUA7yM8UG04kzUJPVXCx8J9VmGmbUf4SnTTCksjLBnYIEkOOeXJAJtO0T'], 'uuids': ['7f9663ba-9e0d-4833-8edb-558cf7968427'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6dc65ba2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['GmbUf4-SnTT-Cksj-LBnY-IEkO-OeXJ-AJtO0T']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.124978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2YdZI3-6CQC-uR2v-c72I-I2Jf-rZdP-m7eIYn', 'scsi-0QEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289', 'scsi-SQEMU_QEMU_HARDDISK_528e4f8d-abfb-4f6b-8b31-c44acf335289'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '528e4f8d', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd-osd--block--5fdd3cdc--a96e--5423--81ac--d20dc4add6fd']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.125001 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.125060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '801ae611', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1', 'scsi-SQEMU_QEMU_HARDDISK_801ae611-6693-4495-a7bb-f144e2a48178-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.125084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.125161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:00:00.125178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU', 'dm-uuid-CRYPT-LUKS2-931641c723454218a67bb8fcf36da2a6-5o0axn-C3Qd-SAlB-8Akj-lVWS-H5X4-4uoXlU'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-20 06:00:00.125203 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:00.125216 | orchestrator |
2026-02-20 06:00:00.125229 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-20 06:00:00.125242 | orchestrator | Friday 20 February 2026 05:59:55 +0000 (0:00:01.873) 1:04:03.448 *******
2026-02-20 06:00:00.125255 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:00.125269 | orchestrator |
2026-02-20 06:00:00.125280 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-20 06:00:00.125292 | orchestrator | Friday 20 February 2026 05:59:57 +0000 (0:00:01.495) 1:04:04.943 *******
2026-02-20 06:00:00.125303 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:00.125315 | orchestrator |
2026-02-20 06:00:00.125326 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 06:00:00.125339 | orchestrator | Friday 20 February 2026 05:59:58 +0000 (0:00:01.130) 1:04:06.074 *******
2026-02-20 06:00:00.125351 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:00.125364 | orchestrator |
2026-02-20 06:00:00.125376 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 06:00:00.125398 | orchestrator | Friday 20 February 2026 06:00:00 +0000 (0:00:01.528) 1:04:07.602 *******
2026-02-20 06:00:40.951357 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951464 | orchestrator |
2026-02-20 06:00:40.951479 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-20 06:00:40.951489 | orchestrator | Friday 20 February 2026 06:00:01 +0000 (0:00:01.116) 1:04:08.719 *******
2026-02-20 06:00:40.951498 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951506 | orchestrator |
2026-02-20 06:00:40.951515 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-20 06:00:40.951523 | orchestrator | Friday 20 February 2026 06:00:02 +0000 (0:00:01.233) 1:04:09.953 *******
2026-02-20 06:00:40.951531 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951539 | orchestrator |
2026-02-20 06:00:40.951547 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-20 06:00:40.951556 | orchestrator | Friday 20 February 2026 06:00:03 +0000 (0:00:01.160) 1:04:11.114 *******
2026-02-20 06:00:40.951564 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 06:00:40.951573 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 06:00:40.951581 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 06:00:40.951589 | orchestrator |
2026-02-20 06:00:40.951597 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-20 06:00:40.951605 | orchestrator | Friday 20 February 2026 06:00:05 +0000 (0:00:01.713) 1:04:12.827 *******
2026-02-20 06:00:40.951613 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-20 06:00:40.951622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-20 06:00:40.951630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-20 06:00:40.951638 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951646 | orchestrator |
2026-02-20 06:00:40.951654 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-20 06:00:40.951663 | orchestrator | Friday 20 February 2026 06:00:06 +0000 (0:00:01.186) 1:04:14.014 *******
2026-02-20 06:00:40.951671 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-20 06:00:40.951680 | orchestrator |
2026-02-20 06:00:40.951689 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 06:00:40.951698 | orchestrator | Friday 20 February 2026 06:00:07 +0000 (0:00:01.114) 1:04:15.129 *******
2026-02-20 06:00:40.951728 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951736 | orchestrator |
2026-02-20 06:00:40.951757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 06:00:40.951765 | orchestrator | Friday 20 February 2026 06:00:08 +0000 (0:00:01.146) 1:04:16.275 *******
2026-02-20 06:00:40.951773 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951781 | orchestrator |
2026-02-20 06:00:40.951789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 06:00:40.951797 | orchestrator | Friday 20 February 2026 06:00:09 +0000 (0:00:01.117) 1:04:17.393 *******
2026-02-20 06:00:40.951805 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951813 | orchestrator |
2026-02-20 06:00:40.951821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 06:00:40.951829 | orchestrator | Friday 20 February 2026 06:00:11 +0000 (0:00:01.265) 1:04:18.659 *******
2026-02-20 06:00:40.951837 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:40.951845 | orchestrator |
2026-02-20 06:00:40.951853 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 06:00:40.951861 | orchestrator | Friday 20 February 2026 06:00:12 +0000 (0:00:01.242) 1:04:19.902 *******
2026-02-20 06:00:40.951869 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 06:00:40.951877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 06:00:40.951886 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 06:00:40.951895 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951905 | orchestrator |
2026-02-20 06:00:40.951914 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 06:00:40.951923 | orchestrator | Friday 20 February 2026 06:00:13 +0000 (0:00:01.463) 1:04:21.366 *******
2026-02-20 06:00:40.951932 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 06:00:40.951941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 06:00:40.951951 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 06:00:40.951960 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.951969 | orchestrator |
2026-02-20 06:00:40.951978 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 06:00:40.951988 | orchestrator | Friday 20 February 2026 06:00:15 +0000 (0:00:01.382) 1:04:22.748 *******
2026-02-20 06:00:40.951997 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-20 06:00:40.952006 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 06:00:40.952016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-20 06:00:40.952026 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.952034 | orchestrator |
2026-02-20 06:00:40.952042 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 06:00:40.952050 | orchestrator | Friday 20 February 2026 06:00:16 +0000 (0:00:01.362) 1:04:24.111 *******
2026-02-20 06:00:40.952058 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:40.952127 | orchestrator |
2026-02-20 06:00:40.952136 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 06:00:40.952144 | orchestrator | Friday 20 February 2026 06:00:17 +0000 (0:00:01.130) 1:04:25.241 *******
2026-02-20 06:00:40.952152 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-20 06:00:40.952159 | orchestrator |
2026-02-20 06:00:40.952167 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 06:00:40.952175 | orchestrator | Friday 20 February 2026 06:00:19 +0000 (0:00:01.337) 1:04:26.579 *******
2026-02-20 06:00:40.952197 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 06:00:40.952205 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 06:00:40.952214 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 06:00:40.952230 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 06:00:40.952239 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 06:00:40.952247 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 06:00:40.952255 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 06:00:40.952263 | orchestrator |
2026-02-20 06:00:40.952270 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 06:00:40.952279 | orchestrator | Friday 20 February 2026 06:00:21 +0000 (0:00:02.261) 1:04:28.841 *******
2026-02-20 06:00:40.952286 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 06:00:40.952294 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 06:00:40.952302 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 06:00:40.952310 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 06:00:40.952338 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-20 06:00:40.952346 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-20 06:00:40.952354 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 06:00:40.952362 | orchestrator |
2026-02-20 06:00:40.952370 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-20 06:00:40.952378 | orchestrator | Friday 20 February 2026 06:00:23 +0000 (0:00:01.977) 1:04:30.818 *******
2026-02-20 06:00:40.952386 | orchestrator | changed: [testbed-node-4]
2026-02-20 06:00:40.952394 | orchestrator |
2026-02-20 06:00:40.952402 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-20 06:00:40.952410 | orchestrator | Friday 20 February 2026 06:00:25 +0000 (0:00:01.980) 1:04:32.798 *******
2026-02-20 06:00:40.952423 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-20 06:00:40.952432 | orchestrator |
2026-02-20 06:00:40.952440 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-20 06:00:40.952448 | orchestrator | Friday 20 February 2026 06:00:27 +0000 (0:00:02.474) 1:04:35.273 *******
2026-02-20 06:00:40.952456 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-20 06:00:40.952464 | orchestrator |
2026-02-20 06:00:40.952472 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 06:00:40.952479 | orchestrator | Friday 20 February 2026 06:00:29 +0000 (0:00:01.982) 1:04:37.256 *******
2026-02-20 06:00:40.952487 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-20 06:00:40.952495 | orchestrator |
2026-02-20 06:00:40.952503 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 06:00:40.952511 | orchestrator | Friday 20 February 2026 06:00:31 +0000 (0:00:01.246) 1:04:38.502 *******
2026-02-20 06:00:40.952519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-20 06:00:40.952527 | orchestrator |
2026-02-20 06:00:40.952535 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 06:00:40.952543 | orchestrator | Friday 20 February 2026 06:00:32 +0000 (0:00:01.101) 1:04:39.604 *******
2026-02-20 06:00:40.952551 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:00:40.952559 | orchestrator |
2026-02-20 06:00:40.952567 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 06:00:40.952575 | orchestrator | Friday 20 February 2026 06:00:33 +0000 (0:00:01.074) 1:04:40.678 *******
2026-02-20 06:00:40.952583 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:00:40.952591 | orchestrator |
2026-02-20 06:00:40.952604 | orchestrator | TASK
[ceph-handler : Check for a mds container] ******************************** 2026-02-20 06:00:40.952612 | orchestrator | Friday 20 February 2026 06:00:34 +0000 (0:00:01.502) 1:04:42.181 ******* 2026-02-20 06:00:40.952620 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:00:40.952628 | orchestrator | 2026-02-20 06:00:40.952636 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-20 06:00:40.952644 | orchestrator | Friday 20 February 2026 06:00:36 +0000 (0:00:01.470) 1:04:43.651 ******* 2026-02-20 06:00:40.952652 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:00:40.952660 | orchestrator | 2026-02-20 06:00:40.952681 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-20 06:00:40.952689 | orchestrator | Friday 20 February 2026 06:00:37 +0000 (0:00:01.488) 1:04:45.140 ******* 2026-02-20 06:00:40.952697 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:00:40.952705 | orchestrator | 2026-02-20 06:00:40.952713 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-20 06:00:40.952721 | orchestrator | Friday 20 February 2026 06:00:38 +0000 (0:00:01.094) 1:04:46.235 ******* 2026-02-20 06:00:40.952729 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:00:40.952737 | orchestrator | 2026-02-20 06:00:40.952745 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-20 06:00:40.952753 | orchestrator | Friday 20 February 2026 06:00:39 +0000 (0:00:01.098) 1:04:47.333 ******* 2026-02-20 06:00:40.952761 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:00:40.952769 | orchestrator | 2026-02-20 06:00:40.952777 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-20 06:00:40.952791 | orchestrator | Friday 20 February 2026 06:00:40 +0000 (0:00:01.091) 1:04:48.424 ******* 2026-02-20 06:01:20.151521 | 
orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151612 | orchestrator | 2026-02-20 06:01:20.151623 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-20 06:01:20.151632 | orchestrator | Friday 20 February 2026 06:00:42 +0000 (0:00:01.268) 1:04:49.692 ******* 2026-02-20 06:01:20.151638 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151645 | orchestrator | 2026-02-20 06:01:20.151651 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-20 06:01:20.151658 | orchestrator | Friday 20 February 2026 06:00:43 +0000 (0:00:01.549) 1:04:51.242 ******* 2026-02-20 06:01:20.151664 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151672 | orchestrator | 2026-02-20 06:01:20.151678 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-20 06:01:20.151684 | orchestrator | Friday 20 February 2026 06:00:44 +0000 (0:00:00.768) 1:04:52.011 ******* 2026-02-20 06:01:20.151690 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151696 | orchestrator | 2026-02-20 06:01:20.151703 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-20 06:01:20.151709 | orchestrator | Friday 20 February 2026 06:00:45 +0000 (0:00:00.781) 1:04:52.792 ******* 2026-02-20 06:01:20.151715 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151733 | orchestrator | 2026-02-20 06:01:20.151740 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-20 06:01:20.151747 | orchestrator | Friday 20 February 2026 06:00:46 +0000 (0:00:00.816) 1:04:53.609 ******* 2026-02-20 06:01:20.151753 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151760 | orchestrator | 2026-02-20 06:01:20.151774 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-20 06:01:20.151780 
| orchestrator | Friday 20 February 2026 06:00:46 +0000 (0:00:00.830) 1:04:54.440 ******* 2026-02-20 06:01:20.151786 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151792 | orchestrator | 2026-02-20 06:01:20.151799 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-20 06:01:20.151805 | orchestrator | Friday 20 February 2026 06:00:47 +0000 (0:00:00.812) 1:04:55.253 ******* 2026-02-20 06:01:20.151811 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151817 | orchestrator | 2026-02-20 06:01:20.151823 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-20 06:01:20.151849 | orchestrator | Friday 20 February 2026 06:00:48 +0000 (0:00:00.765) 1:04:56.019 ******* 2026-02-20 06:01:20.151856 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151862 | orchestrator | 2026-02-20 06:01:20.151879 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-20 06:01:20.151885 | orchestrator | Friday 20 February 2026 06:00:49 +0000 (0:00:00.760) 1:04:56.779 ******* 2026-02-20 06:01:20.151892 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151898 | orchestrator | 2026-02-20 06:01:20.151904 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 06:01:20.151910 | orchestrator | Friday 20 February 2026 06:00:50 +0000 (0:00:00.761) 1:04:57.541 ******* 2026-02-20 06:01:20.151916 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151922 | orchestrator | 2026-02-20 06:01:20.151929 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 06:01:20.151935 | orchestrator | Friday 20 February 2026 06:00:50 +0000 (0:00:00.824) 1:04:58.366 ******* 2026-02-20 06:01:20.151941 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.151947 | orchestrator | 2026-02-20 06:01:20.151953 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-20 06:01:20.151959 | orchestrator | Friday 20 February 2026 06:00:51 +0000 (0:00:00.812) 1:04:59.178 ******* 2026-02-20 06:01:20.151966 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151972 | orchestrator | 2026-02-20 06:01:20.151978 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-20 06:01:20.151984 | orchestrator | Friday 20 February 2026 06:00:52 +0000 (0:00:00.775) 1:04:59.954 ******* 2026-02-20 06:01:20.151991 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.151997 | orchestrator | 2026-02-20 06:01:20.152006 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-20 06:01:20.152016 | orchestrator | Friday 20 February 2026 06:00:53 +0000 (0:00:00.766) 1:05:00.720 ******* 2026-02-20 06:01:20.152069 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152081 | orchestrator | 2026-02-20 06:01:20.152093 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-20 06:01:20.152104 | orchestrator | Friday 20 February 2026 06:00:53 +0000 (0:00:00.758) 1:05:01.478 ******* 2026-02-20 06:01:20.152113 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152121 | orchestrator | 2026-02-20 06:01:20.152128 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-20 06:01:20.152136 | orchestrator | Friday 20 February 2026 06:00:54 +0000 (0:00:00.803) 1:05:02.282 ******* 2026-02-20 06:01:20.152143 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152150 | orchestrator | 2026-02-20 06:01:20.152158 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-20 06:01:20.152165 | orchestrator | Friday 20 February 2026 06:00:55 +0000 (0:00:00.770) 1:05:03.053 ******* 
2026-02-20 06:01:20.152172 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152180 | orchestrator | 2026-02-20 06:01:20.152187 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-20 06:01:20.152194 | orchestrator | Friday 20 February 2026 06:00:56 +0000 (0:00:00.764) 1:05:03.817 ******* 2026-02-20 06:01:20.152201 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152208 | orchestrator | 2026-02-20 06:01:20.152216 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-20 06:01:20.152224 | orchestrator | Friday 20 February 2026 06:00:57 +0000 (0:00:00.740) 1:05:04.558 ******* 2026-02-20 06:01:20.152231 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152239 | orchestrator | 2026-02-20 06:01:20.152246 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-20 06:01:20.152253 | orchestrator | Friday 20 February 2026 06:00:57 +0000 (0:00:00.744) 1:05:05.302 ******* 2026-02-20 06:01:20.152260 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152268 | orchestrator | 2026-02-20 06:01:20.152294 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-20 06:01:20.152302 | orchestrator | Friday 20 February 2026 06:00:58 +0000 (0:00:00.762) 1:05:06.065 ******* 2026-02-20 06:01:20.152308 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152314 | orchestrator | 2026-02-20 06:01:20.152320 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-20 06:01:20.152327 | orchestrator | Friday 20 February 2026 06:00:59 +0000 (0:00:00.773) 1:05:06.838 ******* 2026-02-20 06:01:20.152333 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152339 | orchestrator | 2026-02-20 06:01:20.152345 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-20 06:01:20.152351 | orchestrator | Friday 20 February 2026 06:01:00 +0000 (0:00:00.759) 1:05:07.598 ******* 2026-02-20 06:01:20.152357 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152365 | orchestrator | 2026-02-20 06:01:20.152375 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-20 06:01:20.152384 | orchestrator | Friday 20 February 2026 06:01:00 +0000 (0:00:00.740) 1:05:08.338 ******* 2026-02-20 06:01:20.152393 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.152402 | orchestrator | 2026-02-20 06:01:20.152411 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-20 06:01:20.152421 | orchestrator | Friday 20 February 2026 06:01:02 +0000 (0:00:01.573) 1:05:09.912 ******* 2026-02-20 06:01:20.152430 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.152440 | orchestrator | 2026-02-20 06:01:20.152450 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-20 06:01:20.152459 | orchestrator | Friday 20 February 2026 06:01:04 +0000 (0:00:01.922) 1:05:11.835 ******* 2026-02-20 06:01:20.152468 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-20 06:01:20.152479 | orchestrator | 2026-02-20 06:01:20.152489 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-20 06:01:20.152499 | orchestrator | Friday 20 February 2026 06:01:05 +0000 (0:00:01.200) 1:05:13.035 ******* 2026-02-20 06:01:20.152510 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152519 | orchestrator | 2026-02-20 06:01:20.152528 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-20 06:01:20.152538 | orchestrator | Friday 20 February 2026 06:01:06 +0000 (0:00:01.112) 1:05:14.147 ******* 
2026-02-20 06:01:20.152547 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152557 | orchestrator | 2026-02-20 06:01:20.152568 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-20 06:01:20.152585 | orchestrator | Friday 20 February 2026 06:01:07 +0000 (0:00:01.112) 1:05:15.260 ******* 2026-02-20 06:01:20.152596 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-20 06:01:20.152606 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-20 06:01:20.152616 | orchestrator | 2026-02-20 06:01:20.152622 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-20 06:01:20.152628 | orchestrator | Friday 20 February 2026 06:01:09 +0000 (0:00:01.839) 1:05:17.099 ******* 2026-02-20 06:01:20.152634 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.152641 | orchestrator | 2026-02-20 06:01:20.152647 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-20 06:01:20.152653 | orchestrator | Friday 20 February 2026 06:01:11 +0000 (0:00:01.478) 1:05:18.578 ******* 2026-02-20 06:01:20.152659 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152665 | orchestrator | 2026-02-20 06:01:20.152672 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-20 06:01:20.152678 | orchestrator | Friday 20 February 2026 06:01:12 +0000 (0:00:01.120) 1:05:19.698 ******* 2026-02-20 06:01:20.152684 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152690 | orchestrator | 2026-02-20 06:01:20.152696 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-20 06:01:20.152712 | orchestrator | Friday 20 February 2026 06:01:13 +0000 (0:00:00.793) 1:05:20.492 ******* 2026-02-20 06:01:20.152719 | orchestrator | 
skipping: [testbed-node-4] 2026-02-20 06:01:20.152725 | orchestrator | 2026-02-20 06:01:20.152731 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-20 06:01:20.152737 | orchestrator | Friday 20 February 2026 06:01:13 +0000 (0:00:00.786) 1:05:21.278 ******* 2026-02-20 06:01:20.152744 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-20 06:01:20.152750 | orchestrator | 2026-02-20 06:01:20.152756 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-20 06:01:20.152762 | orchestrator | Friday 20 February 2026 06:01:14 +0000 (0:00:01.115) 1:05:22.393 ******* 2026-02-20 06:01:20.152769 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:01:20.152775 | orchestrator | 2026-02-20 06:01:20.152781 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-20 06:01:20.152787 | orchestrator | Friday 20 February 2026 06:01:16 +0000 (0:00:01.809) 1:05:24.203 ******* 2026-02-20 06:01:20.152794 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-20 06:01:20.152800 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-20 06:01:20.152806 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-20 06:01:20.152812 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152819 | orchestrator | 2026-02-20 06:01:20.152825 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-20 06:01:20.152831 | orchestrator | Friday 20 February 2026 06:01:17 +0000 (0:00:01.157) 1:05:25.361 ******* 2026-02-20 06:01:20.152837 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152843 | orchestrator | 2026-02-20 06:01:20.152849 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-20 06:01:20.152856 | orchestrator | Friday 20 February 2026 06:01:18 +0000 (0:00:01.103) 1:05:26.464 ******* 2026-02-20 06:01:20.152862 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:01:20.152868 | orchestrator | 2026-02-20 06:01:20.152881 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-20 06:02:02.846736 | orchestrator | Friday 20 February 2026 06:01:20 +0000 (0:00:01.162) 1:05:27.627 ******* 2026-02-20 06:02:02.846858 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.846874 | orchestrator | 2026-02-20 06:02:02.846887 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-20 06:02:02.846899 | orchestrator | Friday 20 February 2026 06:01:21 +0000 (0:00:01.124) 1:05:28.751 ******* 2026-02-20 06:02:02.846911 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.846922 | orchestrator | 2026-02-20 06:02:02.846934 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-20 06:02:02.846945 | orchestrator | Friday 20 February 2026 06:01:22 +0000 (0:00:01.153) 1:05:29.905 ******* 2026-02-20 06:02:02.846957 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.846968 | orchestrator | 2026-02-20 06:02:02.846979 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-20 06:02:02.847019 | orchestrator | Friday 20 February 2026 06:01:23 +0000 (0:00:00.794) 1:05:30.700 ******* 2026-02-20 06:02:02.847031 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:02.847043 | orchestrator | 2026-02-20 06:02:02.847055 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-20 06:02:02.847066 | orchestrator | Friday 20 February 2026 06:01:25 +0000 (0:00:02.105) 1:05:32.805 ******* 2026-02-20 06:02:02.847078 | orchestrator | ok: 
[testbed-node-4] 2026-02-20 06:02:02.847089 | orchestrator | 2026-02-20 06:02:02.847100 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-20 06:02:02.847112 | orchestrator | Friday 20 February 2026 06:01:26 +0000 (0:00:00.765) 1:05:33.571 ******* 2026-02-20 06:02:02.847123 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-20 06:02:02.847157 | orchestrator | 2026-02-20 06:02:02.847169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-20 06:02:02.847180 | orchestrator | Friday 20 February 2026 06:01:27 +0000 (0:00:01.113) 1:05:34.684 ******* 2026-02-20 06:02:02.847191 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847203 | orchestrator | 2026-02-20 06:02:02.847214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-20 06:02:02.847225 | orchestrator | Friday 20 February 2026 06:01:28 +0000 (0:00:01.125) 1:05:35.810 ******* 2026-02-20 06:02:02.847236 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847249 | orchestrator | 2026-02-20 06:02:02.847276 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-20 06:02:02.847290 | orchestrator | Friday 20 February 2026 06:01:29 +0000 (0:00:01.141) 1:05:36.951 ******* 2026-02-20 06:02:02.847303 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847316 | orchestrator | 2026-02-20 06:02:02.847330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-20 06:02:02.847343 | orchestrator | Friday 20 February 2026 06:01:30 +0000 (0:00:01.162) 1:05:38.114 ******* 2026-02-20 06:02:02.847356 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847370 | orchestrator | 2026-02-20 06:02:02.847384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-20 06:02:02.847405 | orchestrator | Friday 20 February 2026 06:01:31 +0000 (0:00:01.154) 1:05:39.268 ******* 2026-02-20 06:02:02.847429 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847456 | orchestrator | 2026-02-20 06:02:02.847475 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-20 06:02:02.847493 | orchestrator | Friday 20 February 2026 06:01:32 +0000 (0:00:01.116) 1:05:40.384 ******* 2026-02-20 06:02:02.847515 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847536 | orchestrator | 2026-02-20 06:02:02.847558 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-20 06:02:02.847571 | orchestrator | Friday 20 February 2026 06:01:34 +0000 (0:00:01.139) 1:05:41.524 ******* 2026-02-20 06:02:02.847582 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847593 | orchestrator | 2026-02-20 06:02:02.847604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-20 06:02:02.847615 | orchestrator | Friday 20 February 2026 06:01:35 +0000 (0:00:01.129) 1:05:42.654 ******* 2026-02-20 06:02:02.847626 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.847637 | orchestrator | 2026-02-20 06:02:02.847648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-20 06:02:02.847659 | orchestrator | Friday 20 February 2026 06:01:36 +0000 (0:00:01.140) 1:05:43.794 ******* 2026-02-20 06:02:02.847670 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:02.847681 | orchestrator | 2026-02-20 06:02:02.847692 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-20 06:02:02.847705 | orchestrator | Friday 20 February 2026 06:01:37 +0000 (0:00:00.772) 1:05:44.567 ******* 2026-02-20 06:02:02.847729 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-20 06:02:02.847756 | orchestrator | 2026-02-20 06:02:02.847774 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-20 06:02:02.847792 | orchestrator | Friday 20 February 2026 06:01:38 +0000 (0:00:01.137) 1:05:45.705 ******* 2026-02-20 06:02:02.847812 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-20 06:02:02.847830 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-20 06:02:02.847848 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-20 06:02:02.847868 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-20 06:02:02.847887 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-20 06:02:02.847907 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-20 06:02:02.847919 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-20 06:02:02.847942 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-20 06:02:02.847955 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-20 06:02:02.847965 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-20 06:02:02.847976 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-20 06:02:02.848037 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-20 06:02:02.848051 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-20 06:02:02.848063 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-20 06:02:02.848074 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-20 06:02:02.848086 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-20 06:02:02.848105 | orchestrator | 2026-02-20 06:02:02.848142 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-20 06:02:02.848162 | orchestrator | Friday 20 February 2026 06:01:44 +0000 (0:00:06.558) 1:05:52.263 ******* 2026-02-20 06:02:02.848180 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-20 06:02:02.848214 | orchestrator | 2026-02-20 06:02:02.848234 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-20 06:02:02.848252 | orchestrator | Friday 20 February 2026 06:01:45 +0000 (0:00:01.113) 1:05:53.377 ******* 2026-02-20 06:02:02.848271 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 06:02:02.848293 | orchestrator | 2026-02-20 06:02:02.848313 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-20 06:02:02.848333 | orchestrator | Friday 20 February 2026 06:01:47 +0000 (0:00:01.516) 1:05:54.893 ******* 2026-02-20 06:02:02.848352 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 06:02:02.848367 | orchestrator | 2026-02-20 06:02:02.848381 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-20 06:02:02.848404 | orchestrator | Friday 20 February 2026 06:01:49 +0000 (0:00:01.627) 1:05:56.521 ******* 2026-02-20 06:02:02.848430 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848447 | orchestrator | 2026-02-20 06:02:02.848464 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-20 06:02:02.848484 | orchestrator | Friday 20 February 2026 06:01:49 +0000 (0:00:00.751) 1:05:57.272 ******* 2026-02-20 06:02:02.848503 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848523 | 
orchestrator | 2026-02-20 06:02:02.848536 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-20 06:02:02.848548 | orchestrator | Friday 20 February 2026 06:01:50 +0000 (0:00:00.782) 1:05:58.055 ******* 2026-02-20 06:02:02.848559 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848570 | orchestrator | 2026-02-20 06:02:02.848581 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-20 06:02:02.848592 | orchestrator | Friday 20 February 2026 06:01:51 +0000 (0:00:00.761) 1:05:58.817 ******* 2026-02-20 06:02:02.848603 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848615 | orchestrator | 2026-02-20 06:02:02.848626 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-20 06:02:02.848637 | orchestrator | Friday 20 February 2026 06:01:52 +0000 (0:00:00.795) 1:05:59.613 ******* 2026-02-20 06:02:02.848648 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848659 | orchestrator | 2026-02-20 06:02:02.848671 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-20 06:02:02.848682 | orchestrator | Friday 20 February 2026 06:01:52 +0000 (0:00:00.763) 1:06:00.376 ******* 2026-02-20 06:02:02.848693 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848704 | orchestrator | 2026-02-20 06:02:02.848715 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-20 06:02:02.848738 | orchestrator | Friday 20 February 2026 06:01:53 +0000 (0:00:00.796) 1:06:01.173 ******* 2026-02-20 06:02:02.848749 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848760 | orchestrator | 2026-02-20 06:02:02.848771 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-20 06:02:02.848782 | orchestrator | Friday 20 February 2026 06:01:54 +0000 (0:00:00.766) 1:06:01.939 ******* 2026-02-20 06:02:02.848793 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848804 | orchestrator | 2026-02-20 06:02:02.848815 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-20 06:02:02.848826 | orchestrator | Friday 20 February 2026 06:01:55 +0000 (0:00:00.772) 1:06:02.711 ******* 2026-02-20 06:02:02.848837 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848848 | orchestrator | 2026-02-20 06:02:02.848859 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-20 06:02:02.848870 | orchestrator | Friday 20 February 2026 06:01:56 +0000 (0:00:00.774) 1:06:03.486 ******* 2026-02-20 06:02:02.848881 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848892 | orchestrator | 2026-02-20 06:02:02.848903 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-20 06:02:02.848914 | orchestrator | Friday 20 February 2026 06:01:56 +0000 (0:00:00.754) 1:06:04.241 ******* 2026-02-20 06:02:02.848925 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:02.848937 | orchestrator | 2026-02-20 06:02:02.848948 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-20 06:02:02.848959 | orchestrator | Friday 20 February 2026 06:01:57 +0000 (0:00:00.812) 1:06:05.053 ******* 2026-02-20 06:02:02.848970 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-20 06:02:02.848981 | orchestrator | 2026-02-20 06:02:02.849022 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-20 06:02:02.849034 | orchestrator | Friday 20 February 2026 06:02:02 +0000 (0:00:04.440) 1:06:09.494 ******* 2026-02-20 06:02:02.849046 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 06:02:02.849057 | orchestrator | 2026-02-20 06:02:02.849079 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 06:02:43.460496 | orchestrator | Friday 20 February 2026 06:02:02 +0000 (0:00:00.825) 1:06:10.320 ******* 2026-02-20 06:02:43.460616 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-20 06:02:43.460633 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-20 06:02:43.460641 | orchestrator | 2026-02-20 06:02:43.460648 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 06:02:43.460654 | orchestrator | Friday 20 February 2026 06:02:07 +0000 (0:00:04.729) 1:06:15.049 ******* 2026-02-20 06:02:43.460661 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460668 | orchestrator | 2026-02-20 06:02:43.460674 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 06:02:43.460680 | orchestrator | Friday 20 February 2026 06:02:08 +0000 (0:00:00.801) 1:06:15.851 ******* 2026-02-20 06:02:43.460686 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460694 | orchestrator | 2026-02-20 06:02:43.460701 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 06:02:43.460727 | orchestrator | Friday 20 February 2026 06:02:09 +0000 (0:00:00.772) 1:06:16.624 ******* 2026-02-20 06:02:43.460734 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460741 | orchestrator | 2026-02-20 06:02:43.460747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 06:02:43.460753 | orchestrator | Friday 20 February 2026 06:02:09 +0000 (0:00:00.777) 1:06:17.401 ******* 2026-02-20 06:02:43.460760 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460766 | orchestrator | 2026-02-20 06:02:43.460776 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 06:02:43.460782 | orchestrator | Friday 20 February 2026 06:02:10 +0000 (0:00:00.811) 1:06:18.213 ******* 2026-02-20 06:02:43.460788 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460794 | orchestrator | 2026-02-20 06:02:43.460799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 06:02:43.460805 | orchestrator | Friday 20 February 2026 06:02:11 +0000 (0:00:00.798) 1:06:19.011 ******* 2026-02-20 06:02:43.460811 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:43.460818 | orchestrator | 2026-02-20 06:02:43.460825 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 06:02:43.460832 | orchestrator | Friday 20 February 2026 06:02:12 +0000 (0:00:00.894) 1:06:19.906 ******* 2026-02-20 06:02:43.460838 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 06:02:43.460845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 06:02:43.460852 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 06:02:43.460858 | orchestrator | skipping: 
[testbed-node-4] 2026-02-20 06:02:43.460864 | orchestrator | 2026-02-20 06:02:43.460869 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 06:02:43.460877 | orchestrator | Friday 20 February 2026 06:02:13 +0000 (0:00:01.049) 1:06:20.955 ******* 2026-02-20 06:02:43.460884 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 06:02:43.460892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 06:02:43.460898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 06:02:43.460904 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460910 | orchestrator | 2026-02-20 06:02:43.460917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 06:02:43.460923 | orchestrator | Friday 20 February 2026 06:02:14 +0000 (0:00:01.074) 1:06:22.030 ******* 2026-02-20 06:02:43.460929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-20 06:02:43.460936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-20 06:02:43.460942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-20 06:02:43.460948 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.460996 | orchestrator | 2026-02-20 06:02:43.461004 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 06:02:43.461010 | orchestrator | Friday 20 February 2026 06:02:15 +0000 (0:00:01.035) 1:06:23.065 ******* 2026-02-20 06:02:43.461016 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:43.461022 | orchestrator | 2026-02-20 06:02:43.461029 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 06:02:43.461035 | orchestrator | Friday 20 February 2026 06:02:16 +0000 (0:00:00.800) 1:06:23.865 ******* 2026-02-20 06:02:43.461042 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-20 06:02:43.461048 | orchestrator | 2026-02-20 06:02:43.461054 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 06:02:43.461061 | orchestrator | Friday 20 February 2026 06:02:17 +0000 (0:00:00.989) 1:06:24.855 ******* 2026-02-20 06:02:43.461067 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:43.461074 | orchestrator | 2026-02-20 06:02:43.461081 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-20 06:02:43.461108 | orchestrator | Friday 20 February 2026 06:02:18 +0000 (0:00:01.365) 1:06:26.220 ******* 2026-02-20 06:02:43.461116 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-02-20 06:02:43.461123 | orchestrator | 2026-02-20 06:02:43.461154 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 06:02:43.461162 | orchestrator | Friday 20 February 2026 06:02:19 +0000 (0:00:01.207) 1:06:27.428 ******* 2026-02-20 06:02:43.461169 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 06:02:43.461177 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 06:02:43.461185 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 06:02:43.461192 | orchestrator | 2026-02-20 06:02:43.461199 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 06:02:43.461206 | orchestrator | Friday 20 February 2026 06:02:23 +0000 (0:00:03.256) 1:06:30.685 ******* 2026-02-20 06:02:43.461214 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-20 06:02:43.461221 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-20 06:02:43.461228 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:43.461235 | orchestrator | 2026-02-20 06:02:43.461243 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-20 06:02:43.461250 | orchestrator | Friday 20 February 2026 06:02:25 +0000 (0:00:01.990) 1:06:32.676 ******* 2026-02-20 06:02:43.461257 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.461265 | orchestrator | 2026-02-20 06:02:43.461272 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-20 06:02:43.461279 | orchestrator | Friday 20 February 2026 06:02:25 +0000 (0:00:00.797) 1:06:33.473 ******* 2026-02-20 06:02:43.461285 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-02-20 06:02:43.461292 | orchestrator | 2026-02-20 06:02:43.461298 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-20 06:02:43.461304 | orchestrator | Friday 20 February 2026 06:02:27 +0000 (0:00:01.108) 1:06:34.582 ******* 2026-02-20 06:02:43.461310 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 06:02:43.461318 | orchestrator | 2026-02-20 06:02:43.461324 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-20 06:02:43.461331 | orchestrator | Friday 20 February 2026 06:02:28 +0000 (0:00:01.628) 1:06:36.211 ******* 2026-02-20 06:02:43.461342 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 06:02:43.461350 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 06:02:43.461356 | orchestrator | 2026-02-20 06:02:43.461363 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 06:02:43.461370 | orchestrator | Friday 20 February 2026 06:02:33 +0000 (0:00:05.171) 1:06:41.383 ******* 
2026-02-20 06:02:43.461376 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 06:02:43.461383 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 06:02:43.461389 | orchestrator | 2026-02-20 06:02:43.461396 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 06:02:43.461402 | orchestrator | Friday 20 February 2026 06:02:37 +0000 (0:00:03.220) 1:06:44.605 ******* 2026-02-20 06:02:43.461408 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-20 06:02:43.461414 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:02:43.461421 | orchestrator | 2026-02-20 06:02:43.461427 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-20 06:02:43.461432 | orchestrator | Friday 20 February 2026 06:02:38 +0000 (0:00:01.635) 1:06:46.240 ******* 2026-02-20 06:02:43.461438 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-20 06:02:43.461451 | orchestrator | 2026-02-20 06:02:43.461457 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-20 06:02:43.461463 | orchestrator | Friday 20 February 2026 06:02:39 +0000 (0:00:01.119) 1:06:47.360 ******* 2026-02-20 06:02:43.461469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461503 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:02:43.461510 | orchestrator | 2026-02-20 06:02:43.461516 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-20 06:02:43.461523 | orchestrator | Friday 20 February 2026 06:02:41 +0000 (0:00:01.985) 1:06:49.346 ******* 2026-02-20 06:02:43.461529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:02:43.461555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:03:50.935882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-20 06:03:50.936103 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:03:50.936122 | orchestrator | 2026-02-20 06:03:50.936134 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-20 06:03:50.936147 | orchestrator | Friday 20 February 2026 06:02:43 +0000 (0:00:01.582) 1:06:50.928 ******* 2026-02-20 06:03:50.936158 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 06:03:50.936171 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 06:03:50.936182 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 06:03:50.936193 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 06:03:50.936207 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-20 06:03:50.936218 | orchestrator | 2026-02-20 06:03:50.936229 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-20 06:03:50.936240 | orchestrator | Friday 20 February 2026 06:03:15 +0000 (0:00:32.488) 1:07:23.416 ******* 2026-02-20 06:03:50.936251 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:03:50.936262 | orchestrator | 2026-02-20 06:03:50.936273 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-20 06:03:50.936284 | orchestrator | Friday 20 February 2026 06:03:16 +0000 (0:00:00.769) 1:07:24.185 ******* 2026-02-20 06:03:50.936332 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:03:50.936345 | orchestrator | 2026-02-20 06:03:50.936358 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-20 06:03:50.936372 | orchestrator | Friday 20 February 2026 06:03:17 +0000 (0:00:00.746) 1:07:24.932 ******* 2026-02-20 06:03:50.936385 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-20 06:03:50.936399 | orchestrator | 2026-02-20 06:03:50.936411 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-20 06:03:50.936424 | orchestrator | Friday 20 February 2026 06:03:18 +0000 (0:00:01.095) 1:07:26.028 ******* 2026-02-20 06:03:50.936436 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-20 06:03:50.936449 | orchestrator | 2026-02-20 06:03:50.936462 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-20 06:03:50.936475 | orchestrator | Friday 20 February 2026 06:03:19 +0000 (0:00:01.095) 1:07:27.124 ******* 2026-02-20 06:03:50.936487 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:03:50.936502 | orchestrator | 2026-02-20 06:03:50.936514 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-20 06:03:50.936527 | orchestrator | Friday 20 February 2026 06:03:21 +0000 (0:00:02.036) 1:07:29.160 ******* 2026-02-20 06:03:50.936540 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:03:50.936554 | orchestrator | 2026-02-20 06:03:50.936566 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-20 06:03:50.936580 | orchestrator | Friday 20 February 2026 06:03:23 +0000 (0:00:01.941) 1:07:31.102 ******* 2026-02-20 06:03:50.936593 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:03:50.936606 | orchestrator | 2026-02-20 06:03:50.936619 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-20 06:03:50.936632 | orchestrator | Friday 20 February 2026 06:03:25 +0000 (0:00:02.223) 1:07:33.325 ******* 2026-02-20 06:03:50.936645 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-20 06:03:50.936658 | orchestrator | 2026-02-20 06:03:50.936671 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-20 06:03:50.936685 | 
orchestrator | 2026-02-20 06:03:50.936698 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 06:03:50.936710 | orchestrator | Friday 20 February 2026 06:03:29 +0000 (0:00:03.185) 1:07:36.511 ******* 2026-02-20 06:03:50.936721 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-20 06:03:50.936732 | orchestrator | 2026-02-20 06:03:50.936743 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-20 06:03:50.936754 | orchestrator | Friday 20 February 2026 06:03:30 +0000 (0:00:01.105) 1:07:37.617 ******* 2026-02-20 06:03:50.936764 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.936775 | orchestrator | 2026-02-20 06:03:50.936786 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-20 06:03:50.936797 | orchestrator | Friday 20 February 2026 06:03:31 +0000 (0:00:01.431) 1:07:39.048 ******* 2026-02-20 06:03:50.936808 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.936819 | orchestrator | 2026-02-20 06:03:50.936829 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 06:03:50.936840 | orchestrator | Friday 20 February 2026 06:03:32 +0000 (0:00:01.148) 1:07:40.196 ******* 2026-02-20 06:03:50.936851 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.936862 | orchestrator | 2026-02-20 06:03:50.936873 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 06:03:50.936884 | orchestrator | Friday 20 February 2026 06:03:34 +0000 (0:00:01.507) 1:07:41.704 ******* 2026-02-20 06:03:50.936917 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.936928 | orchestrator | 2026-02-20 06:03:50.936958 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-20 06:03:50.936978 | orchestrator | Friday 20 
February 2026 06:03:35 +0000 (0:00:01.141) 1:07:42.845 ******* 2026-02-20 06:03:50.937009 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.937045 | orchestrator | 2026-02-20 06:03:50.937062 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-20 06:03:50.937080 | orchestrator | Friday 20 February 2026 06:03:36 +0000 (0:00:01.115) 1:07:43.961 ******* 2026-02-20 06:03:50.937097 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.937114 | orchestrator | 2026-02-20 06:03:50.937132 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-20 06:03:50.937151 | orchestrator | Friday 20 February 2026 06:03:37 +0000 (0:00:01.157) 1:07:45.119 ******* 2026-02-20 06:03:50.937169 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:03:50.937188 | orchestrator | 2026-02-20 06:03:50.937206 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-20 06:03:50.937224 | orchestrator | Friday 20 February 2026 06:03:38 +0000 (0:00:01.133) 1:07:46.252 ******* 2026-02-20 06:03:50.937242 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.937259 | orchestrator | 2026-02-20 06:03:50.937277 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-20 06:03:50.937294 | orchestrator | Friday 20 February 2026 06:03:39 +0000 (0:00:01.092) 1:07:47.345 ******* 2026-02-20 06:03:50.937313 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 06:03:50.937330 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 06:03:50.937349 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 06:03:50.937366 | orchestrator | 2026-02-20 06:03:50.937386 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-20 06:03:50.937404 | orchestrator | Friday 20 February 2026 06:03:41 +0000 (0:00:01.999) 1:07:49.344 ******* 2026-02-20 06:03:50.937424 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:03:50.937442 | orchestrator | 2026-02-20 06:03:50.937462 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-20 06:03:50.937481 | orchestrator | Friday 20 February 2026 06:03:43 +0000 (0:00:01.570) 1:07:50.914 ******* 2026-02-20 06:03:50.937510 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-20 06:03:50.937530 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-20 06:03:50.937548 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-20 06:03:50.937566 | orchestrator | 2026-02-20 06:03:50.937585 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-20 06:03:50.937604 | orchestrator | Friday 20 February 2026 06:03:46 +0000 (0:00:03.232) 1:07:54.147 ******* 2026-02-20 06:03:50.937623 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 06:03:50.937643 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 06:03:50.937662 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 06:03:50.937676 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:03:50.937687 | orchestrator | 2026-02-20 06:03:50.937698 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-20 06:03:50.937708 | orchestrator | Friday 20 February 2026 06:03:48 +0000 (0:00:01.438) 1:07:55.585 ******* 2026-02-20 06:03:50.937721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-20 06:03:50.937735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-20 06:03:50.937752 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-20 06:03:50.937784 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:03:50.937838 | orchestrator | 2026-02-20 06:03:50.937858 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-20 06:03:50.937876 | orchestrator | Friday 20 February 2026 06:03:49 +0000 (0:00:01.638) 1:07:57.224 ******* 2026-02-20 06:03:50.937922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 06:03:50.937962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:09.452731 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:09.452910 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.452941 | orchestrator | 2026-02-20 06:04:09.452963 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-20 06:04:09.452985 | orchestrator | Friday 20 February 2026 06:03:50 +0000 (0:00:01.184) 1:07:58.409 ******* 2026-02-20 06:04:09.453008 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'fa7fdf54d9d5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-20 06:03:44.294537', 'end': '2026-02-20 06:03:44.352045', 'delta': '0:00:00.057508', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fa7fdf54d9d5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-20 06:04:09.453053 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9edb2d04dfb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-20 06:03:44.865228', 'end': '2026-02-20 06:03:44.910168', 'delta': '0:00:00.044940', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9edb2d04dfb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-20 06:04:09.453076 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '18841505eff4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-20 06:03:45.447594', 'end': '2026-02-20 06:03:45.503290', 'delta': '0:00:00.055696', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['18841505eff4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-20 06:04:09.453123 | orchestrator | 2026-02-20 06:04:09.453136 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-20 06:04:09.453149 | orchestrator | Friday 20 February 2026 06:03:52 +0000 (0:00:01.171) 1:07:59.581 ******* 2026-02-20 06:04:09.453160 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.453172 | orchestrator | 2026-02-20 06:04:09.453183 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-20 06:04:09.453194 | orchestrator | Friday 20 February 2026 06:03:53 +0000 (0:00:01.234) 1:08:00.816 ******* 2026-02-20 06:04:09.453206 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453219 | orchestrator | 2026-02-20 06:04:09.453232 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-20 06:04:09.453245 | orchestrator | Friday 20 February 2026 06:03:54 +0000 (0:00:01.244) 1:08:02.061 ******* 2026-02-20 06:04:09.453258 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.453271 | orchestrator | 2026-02-20 06:04:09.453284 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-20 06:04:09.453298 | orchestrator | Friday 20 February 2026 06:03:55 +0000 (0:00:01.146) 1:08:03.208 ******* 2026-02-20 06:04:09.453312 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-20 06:04:09.453324 | orchestrator | 2026-02-20 06:04:09.453337 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 06:04:09.453351 | orchestrator | Friday 20 February 2026 06:03:57 +0000 (0:00:01.941) 1:08:05.149 ******* 2026-02-20 06:04:09.453364 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.453377 | orchestrator | 2026-02-20 06:04:09.453390 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-20 06:04:09.453403 | orchestrator | Friday 20 February 2026 06:03:58 +0000 (0:00:01.151) 1:08:06.301 ******* 2026-02-20 06:04:09.453435 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453449 | orchestrator | 2026-02-20 06:04:09.453462 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-20 06:04:09.453475 | orchestrator | Friday 20 February 2026 06:04:00 +0000 (0:00:01.185) 1:08:07.487 ******* 2026-02-20 06:04:09.453489 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453502 | orchestrator | 2026-02-20 06:04:09.453516 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-20 06:04:09.453530 | orchestrator | Friday 20 February 2026 06:04:01 +0000 (0:00:01.255) 1:08:08.742 ******* 2026-02-20 06:04:09.453541 | orchestrator | 
skipping: [testbed-node-5] 2026-02-20 06:04:09.453552 | orchestrator | 2026-02-20 06:04:09.453563 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-20 06:04:09.453582 | orchestrator | Friday 20 February 2026 06:04:02 +0000 (0:00:01.097) 1:08:09.839 ******* 2026-02-20 06:04:09.453601 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453628 | orchestrator | 2026-02-20 06:04:09.453648 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-20 06:04:09.453667 | orchestrator | Friday 20 February 2026 06:04:03 +0000 (0:00:01.123) 1:08:10.963 ******* 2026-02-20 06:04:09.453684 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.453703 | orchestrator | 2026-02-20 06:04:09.453719 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-20 06:04:09.453738 | orchestrator | Friday 20 February 2026 06:04:04 +0000 (0:00:01.152) 1:08:12.116 ******* 2026-02-20 06:04:09.453756 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453775 | orchestrator | 2026-02-20 06:04:09.453799 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-20 06:04:09.453817 | orchestrator | Friday 20 February 2026 06:04:05 +0000 (0:00:01.107) 1:08:13.223 ******* 2026-02-20 06:04:09.453857 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.453910 | orchestrator | 2026-02-20 06:04:09.453930 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-20 06:04:09.453948 | orchestrator | Friday 20 February 2026 06:04:06 +0000 (0:00:01.133) 1:08:14.357 ******* 2026-02-20 06:04:09.453966 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:09.453985 | orchestrator | 2026-02-20 06:04:09.454014 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-20 06:04:09.454113 
| orchestrator | Friday 20 February 2026 06:04:07 +0000 (0:00:01.095) 1:08:15.452 ******* 2026-02-20 06:04:09.454135 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:09.454155 | orchestrator | 2026-02-20 06:04:09.454172 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-20 06:04:09.454193 | orchestrator | Friday 20 February 2026 06:04:09 +0000 (0:00:01.172) 1:08:16.625 ******* 2026-02-20 06:04:09.454214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:09.454236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}})  2026-02-20 06:04:09.454254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-20 06:04:09.454281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}})  2026-02-20 06:04:10.613622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-20 06:04:10.613730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}})  2026-02-20 06:04:10.613757 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}})  2026-02-20 06:04:10.613767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-20 06:04:10.613781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-20 06:04:10.613794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-20 06:04:10.824814 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:10.824962 | orchestrator | 2026-02-20 06:04:10.824977 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-20 06:04:10.824986 | orchestrator | Friday 20 February 2026 06:04:10 +0000 (0:00:01.465) 1:08:18.091 ******* 2026-02-20 06:04:10.824997 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2', 'dm-uuid-LVM-M2C8S8UVlhXUKOUHVYtq26o4q1qe3SqLeECzduiKtPWWIbJabB9IOMLcCGA61b8F'], 'uuids': ['81982070-0591-4c7e-bdd5-9c8a78ca773c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8', 'scsi-SQEMU_QEMU_HARDDISK_71e39072-aa44-4a66-a05c-ec4b85d3c9c8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71e39072', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825041 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tuZx2P-aY2U-pmBQ-dfdb-DpNW-3dKP-AWDCQ4', 'scsi-0QEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57', 'scsi-SQEMU_QEMU_HARDDISK_7c48204c-9f75-4242-ad46-06da15902d57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-20-01-35-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b', 'dm-uuid-CRYPT-LUKS2-7f5d4cd4cc71449e82aac2f81f5aced6-nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:10.825140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae-osd--block--9fd87d74--f7c4--5aa7--94da--ba8f1e0708ae', 'dm-uuid-LVM-5ANCf8I6gapO5QY5OQELwuYZqJbHngb8nyP1EwPXTNj49fiX6eCZwsotaWM2Mv9b'], 'uuids': ['7f5d4cd4-cc71-449e-82aa-c2f81f5aced6'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7c48204c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nyP1Ew-PXTN-j49f-iX6e-CZws-otaW-M2Mv9b']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4WM9YE-fUvA-MoYX-Wgng-L1JX-iz7g-FbCOv0', 'scsi-0QEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9', 'scsi-SQEMU_QEMU_HARDDISK_35df89d5-061b-439b-8792-1a54b4ca06e9'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '35df89d5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5fe77357--4c85--56ab--aabd--7cb5a18434f2-osd--block--5fe77357--4c85--56ab--aabd--7cb5a18434f2']}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544276 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'be990183', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1', 'scsi-SQEMU_QEMU_HARDDISK_be990183-125b-4ff4-addd-12788a17416c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F', 'dm-uuid-CRYPT-LUKS2-8198207005914c7ebdd59c8a78ca773c-eECzdu-iKtP-WWIb-JabB-9IOM-LcCG-A61b8F'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-20 06:04:23.544407 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:23.544422 | orchestrator | 2026-02-20 06:04:23.544435 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-20 06:04:23.544449 | orchestrator | Friday 20 February 2026 06:04:11 +0000 (0:00:01.381) 1:08:19.472 ******* 2026-02-20 06:04:23.544461 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:23.544475 | orchestrator | 2026-02-20 06:04:23.544488 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-20 06:04:23.544500 | orchestrator | Friday 20 February 2026 06:04:13 +0000 (0:00:01.523) 1:08:20.995 ******* 2026-02-20 06:04:23.544514 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:23.544528 | orchestrator | 2026-02-20 06:04:23.544541 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 06:04:23.544555 | orchestrator | Friday 20 February 2026 06:04:14 +0000 (0:00:01.158) 1:08:22.154 ******* 2026-02-20 06:04:23.544569 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:04:23.544582 | orchestrator | 2026-02-20 06:04:23.544595 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 06:04:23.544607 | orchestrator | Friday 20 February 2026 06:04:16 +0000 (0:00:01.464) 1:08:23.618 ******* 2026-02-20 06:04:23.544620 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:23.544632 | orchestrator | 2026-02-20 06:04:23.544656 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-20 06:04:23.544669 | orchestrator | Friday 20 February 2026 06:04:17 +0000 (0:00:01.108) 1:08:24.726 ******* 2026-02-20 06:04:23.544682 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
06:04:23.544694 | orchestrator | 2026-02-20 06:04:23.544706 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-20 06:04:23.544719 | orchestrator | Friday 20 February 2026 06:04:18 +0000 (0:00:01.206) 1:08:25.933 ******* 2026-02-20 06:04:23.544732 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:23.544744 | orchestrator | 2026-02-20 06:04:23.544756 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-20 06:04:23.544769 | orchestrator | Friday 20 February 2026 06:04:19 +0000 (0:00:01.169) 1:08:27.103 ******* 2026-02-20 06:04:23.544781 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-20 06:04:23.544793 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-20 06:04:23.544805 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-20 06:04:23.544817 | orchestrator | 2026-02-20 06:04:23.544830 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-20 06:04:23.544842 | orchestrator | Friday 20 February 2026 06:04:21 +0000 (0:00:01.637) 1:08:28.740 ******* 2026-02-20 06:04:23.544855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-20 06:04:23.544900 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-20 06:04:23.544912 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-20 06:04:23.544924 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:04:23.544936 | orchestrator | 2026-02-20 06:04:23.544948 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-20 06:04:23.544961 | orchestrator | Friday 20 February 2026 06:04:22 +0000 (0:00:01.161) 1:08:29.902 ******* 2026-02-20 06:04:23.544973 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-20 06:04:23.544986 | 
orchestrator |
2026-02-20 06:04:23.545012 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-20 06:05:05.104885 | orchestrator | Friday 20 February 2026 06:04:23 +0000 (0:00:01.115) 1:08:31.018 *******
2026-02-20 06:05:05.105010 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105023 | orchestrator |
2026-02-20 06:05:05.105033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-20 06:05:05.105042 | orchestrator | Friday 20 February 2026 06:04:24 +0000 (0:00:01.155) 1:08:32.174 *******
2026-02-20 06:05:05.105050 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105059 | orchestrator |
2026-02-20 06:05:05.105067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-20 06:05:05.105075 | orchestrator | Friday 20 February 2026 06:04:25 +0000 (0:00:01.165) 1:08:33.339 *******
2026-02-20 06:05:05.105083 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105091 | orchestrator |
2026-02-20 06:05:05.105099 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-20 06:05:05.105107 | orchestrator | Friday 20 February 2026 06:04:26 +0000 (0:00:01.121) 1:08:34.461 *******
2026-02-20 06:05:05.105116 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.105124 | orchestrator |
2026-02-20 06:05:05.105148 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-20 06:05:05.105156 | orchestrator | Friday 20 February 2026 06:04:28 +0000 (0:00:01.209) 1:08:35.671 *******
2026-02-20 06:05:05.105170 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 06:05:05.105185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 06:05:05.105199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 06:05:05.105213 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105226 | orchestrator |
2026-02-20 06:05:05.105239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-20 06:05:05.105275 | orchestrator | Friday 20 February 2026 06:04:29 +0000 (0:00:01.339) 1:08:37.010 *******
2026-02-20 06:05:05.105290 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 06:05:05.105304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 06:05:05.105317 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 06:05:05.105330 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105342 | orchestrator |
2026-02-20 06:05:05.105356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-20 06:05:05.105370 | orchestrator | Friday 20 February 2026 06:04:30 +0000 (0:00:01.359) 1:08:38.370 *******
2026-02-20 06:05:05.105385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-20 06:05:05.105399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-20 06:05:05.105413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 06:05:05.105429 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.105442 | orchestrator |
2026-02-20 06:05:05.105457 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-20 06:05:05.105472 | orchestrator | Friday 20 February 2026 06:04:32 +0000 (0:00:01.672) 1:08:40.043 *******
2026-02-20 06:05:05.105487 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.105501 | orchestrator |
2026-02-20 06:05:05.105515 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-20 06:05:05.105530 | orchestrator | Friday 20 February 2026 06:04:33 +0000 (0:00:01.151) 1:08:41.194 *******
2026-02-20 06:05:05.105544 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-20 06:05:05.105559 | orchestrator |
2026-02-20 06:05:05.105573 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-20 06:05:05.105587 | orchestrator | Friday 20 February 2026 06:04:35 +0000 (0:00:01.674) 1:08:42.869 *******
2026-02-20 06:05:05.105602 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 06:05:05.105617 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 06:05:05.105630 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 06:05:05.105644 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 06:05:05.105658 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 06:05:05.105671 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 06:05:05.105684 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 06:05:05.105698 | orchestrator |
2026-02-20 06:05:05.105712 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-20 06:05:05.105726 | orchestrator | Friday 20 February 2026 06:04:37 +0000 (0:00:01.776) 1:08:44.645 *******
2026-02-20 06:05:05.105739 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-20 06:05:05.105753 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-20 06:05:05.105766 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-20 06:05:05.105779 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-20 06:05:05.105793 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-20 06:05:05.105806 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-20 06:05:05.105820 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-20 06:05:05.105857 | orchestrator |
2026-02-20 06:05:05.105871 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-20 06:05:05.105885 | orchestrator | Friday 20 February 2026 06:04:39 +0000 (0:00:02.132) 1:08:46.777 *******
2026-02-20 06:05:05.105909 | orchestrator | changed: [testbed-node-5]
2026-02-20 06:05:05.105922 | orchestrator |
2026-02-20 06:05:05.105958 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-20 06:05:05.105973 | orchestrator | Friday 20 February 2026 06:04:41 +0000 (0:00:01.952) 1:08:48.730 *******
2026-02-20 06:05:05.105987 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 06:05:05.106004 | orchestrator |
2026-02-20 06:05:05.106081 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-20 06:05:05.106097 | orchestrator | Friday 20 February 2026 06:04:43 +0000 (0:00:02.656) 1:08:51.387 *******
2026-02-20 06:05:05.106112 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 06:05:05.106127 | orchestrator |
2026-02-20 06:05:05.106141 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 06:05:05.106154 | orchestrator | Friday 20 February 2026 06:04:45 +0000 (0:00:01.092) 1:08:53.373 *******
2026-02-20 06:05:05.106169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-20 06:05:05.106184 | orchestrator |
2026-02-20 06:05:05.106208 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 06:05:05.106223 | orchestrator | Friday 20 February 2026 06:04:46 +0000 (0:00:01.113) 1:08:54.466 *******
2026-02-20 06:05:05.106237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-20 06:05:05.106251 | orchestrator |
2026-02-20 06:05:05.106265 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 06:05:05.106279 | orchestrator | Friday 20 February 2026 06:04:48 +0000 (0:00:01.119) 1:08:55.579 *******
2026-02-20 06:05:05.106292 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106306 | orchestrator |
2026-02-20 06:05:05.106320 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 06:05:05.106333 | orchestrator | Friday 20 February 2026 06:04:49 +0000 (0:00:01.119) 1:08:56.699 *******
2026-02-20 06:05:05.106346 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106358 | orchestrator |
2026-02-20 06:05:05.106366 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 06:05:05.106374 | orchestrator | Friday 20 February 2026 06:04:50 +0000 (0:00:01.528) 1:08:58.228 *******
2026-02-20 06:05:05.106382 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106390 | orchestrator |
2026-02-20 06:05:05.106398 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 06:05:05.106406 | orchestrator | Friday 20 February 2026 06:04:52 +0000 (0:00:01.611) 1:08:59.839 *******
2026-02-20 06:05:05.106414 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106422 | orchestrator |
2026-02-20 06:05:05.106430 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 06:05:05.106438 | orchestrator | Friday 20 February 2026 06:04:53 +0000 (0:00:01.515) 1:09:01.355 *******
2026-02-20 06:05:05.106446 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106454 | orchestrator |
2026-02-20 06:05:05.106462 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 06:05:05.106470 | orchestrator | Friday 20 February 2026 06:04:55 +0000 (0:00:01.147) 1:09:02.502 *******
2026-02-20 06:05:05.106478 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106486 | orchestrator |
2026-02-20 06:05:05.106493 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 06:05:05.106501 | orchestrator | Friday 20 February 2026 06:04:56 +0000 (0:00:01.103) 1:09:03.606 *******
2026-02-20 06:05:05.106509 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106517 | orchestrator |
2026-02-20 06:05:05.106525 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 06:05:05.106533 | orchestrator | Friday 20 February 2026 06:04:57 +0000 (0:00:01.101) 1:09:04.708 *******
2026-02-20 06:05:05.106549 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106558 | orchestrator |
2026-02-20 06:05:05.106566 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 06:05:05.106574 | orchestrator | Friday 20 February 2026 06:04:58 +0000 (0:00:01.608) 1:09:06.317 *******
2026-02-20 06:05:05.106582 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106589 | orchestrator |
2026-02-20 06:05:05.106598 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 06:05:05.106611 | orchestrator | Friday 20 February 2026 06:05:00 +0000 (0:00:01.549) 1:09:07.866 *******
2026-02-20 06:05:05.106625 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106638 | orchestrator |
2026-02-20 06:05:05.106651 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 06:05:05.106664 | orchestrator | Friday 20 February 2026 06:05:01 +0000 (0:00:00.762) 1:09:08.629 *******
2026-02-20 06:05:05.106679 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.106692 | orchestrator |
2026-02-20 06:05:05.106706 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 06:05:05.106719 | orchestrator | Friday 20 February 2026 06:05:01 +0000 (0:00:00.768) 1:09:09.397 *******
2026-02-20 06:05:05.106733 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106746 | orchestrator |
2026-02-20 06:05:05.106758 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 06:05:05.106770 | orchestrator | Friday 20 February 2026 06:05:02 +0000 (0:00:00.833) 1:09:10.231 *******
2026-02-20 06:05:05.106783 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106795 | orchestrator |
2026-02-20 06:05:05.106808 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 06:05:05.106820 | orchestrator | Friday 20 February 2026 06:05:03 +0000 (0:00:00.777) 1:09:11.009 *******
2026-02-20 06:05:05.106942 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:05.106961 | orchestrator |
2026-02-20 06:05:05.106974 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 06:05:05.106987 | orchestrator | Friday 20 February 2026 06:05:04 +0000 (0:00:00.790) 1:09:11.800 *******
2026-02-20 06:05:05.107000 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:05.107014 | orchestrator |
2026-02-20 06:05:05.107041 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 06:05:45.191459 | orchestrator | Friday 20 February 2026 06:05:05 +0000 (0:00:00.777) 1:09:12.577 *******
2026-02-20 06:05:45.191578 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.191595 | orchestrator |
2026-02-20 06:05:45.191609 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 06:05:45.191621 | orchestrator | Friday 20 February 2026 06:05:05 +0000 (0:00:00.833) 1:09:13.410 *******
2026-02-20 06:05:45.191632 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.191643 | orchestrator |
2026-02-20 06:05:45.191654 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-20 06:05:45.191665 | orchestrator | Friday 20 February 2026 06:05:06 +0000 (0:00:00.762) 1:09:14.173 *******
2026-02-20 06:05:45.191676 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.191688 | orchestrator |
2026-02-20 06:05:45.191699 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-20 06:05:45.191710 | orchestrator | Friday 20 February 2026 06:05:07 +0000 (0:00:00.773) 1:09:14.946 *******
2026-02-20 06:05:45.191721 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.191732 | orchestrator |
2026-02-20 06:05:45.191758 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-20 06:05:45.191771 | orchestrator | Friday 20 February 2026 06:05:08 +0000 (0:00:00.775) 1:09:15.722 *******
2026-02-20 06:05:45.191781 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.191793 | orchestrator |
2026-02-20 06:05:45.191886 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-20 06:05:45.191900 | orchestrator | Friday 20 February 2026 06:05:08 +0000 (0:00:00.754) 1:09:16.476 *******
2026-02-20 06:05:45.191933 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.191945 | orchestrator |
2026-02-20 06:05:45.191956 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-20 06:05:45.191967 | orchestrator | Friday 20 February 2026 06:05:09 +0000 (0:00:00.746) 1:09:17.222 *******
2026-02-20 06:05:45.191978 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.191989 | orchestrator |
2026-02-20 06:05:45.192000 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-20 06:05:45.192011 | orchestrator | Friday 20 February 2026 06:05:10 +0000 (0:00:00.794) 1:09:18.017 *******
2026-02-20 06:05:45.192022 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192033 | orchestrator |
2026-02-20 06:05:45.192044 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-20 06:05:45.192055 | orchestrator | Friday 20 February 2026 06:05:11 +0000 (0:00:00.798) 1:09:18.816 *******
2026-02-20 06:05:45.192066 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192077 | orchestrator |
2026-02-20 06:05:45.192088 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-20 06:05:45.192099 | orchestrator | Friday 20 February 2026 06:05:12 +0000 (0:00:00.757) 1:09:19.573 *******
2026-02-20 06:05:45.192110 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192121 | orchestrator |
2026-02-20 06:05:45.192133 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-20 06:05:45.192144 | orchestrator | Friday 20 February 2026 06:05:12 +0000 (0:00:00.744) 1:09:20.317 *******
2026-02-20 06:05:45.192155 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192166 | orchestrator |
2026-02-20 06:05:45.192177 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-20 06:05:45.192189 | orchestrator | Friday 20 February 2026 06:05:13 +0000 (0:00:00.761) 1:09:21.079 *******
2026-02-20 06:05:45.192200 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192212 | orchestrator |
2026-02-20 06:05:45.192231 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-20 06:05:45.192250 | orchestrator | Friday 20 February 2026 06:05:14 +0000 (0:00:00.754) 1:09:21.833 *******
2026-02-20 06:05:45.192268 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192286 | orchestrator |
2026-02-20 06:05:45.192303 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-20 06:05:45.192321 | orchestrator | Friday 20 February 2026 06:05:15 +0000 (0:00:00.898) 1:09:22.732 *******
2026-02-20 06:05:45.192338 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192357 | orchestrator |
2026-02-20 06:05:45.192376 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-20 06:05:45.192395 | orchestrator | Friday 20 February 2026 06:05:16 +0000 (0:00:00.756) 1:09:23.489 *******
2026-02-20 06:05:45.192413 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192430 | orchestrator |
2026-02-20 06:05:45.192447 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-20 06:05:45.192464 | orchestrator | Friday 20 February 2026 06:05:16 +0000 (0:00:00.752) 1:09:24.242 *******
2026-02-20 06:05:45.192482 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192499 | orchestrator |
2026-02-20 06:05:45.192518 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-20 06:05:45.192537 | orchestrator | Friday 20 February 2026 06:05:17 +0000 (0:00:00.766) 1:09:25.008 *******
2026-02-20 06:05:45.192556 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.192575 | orchestrator |
2026-02-20 06:05:45.192602 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-20 06:05:45.192623 | orchestrator | Friday 20 February 2026 06:05:19 +0000 (0:00:01.607) 1:09:26.616 *******
2026-02-20 06:05:45.192641 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.192659 | orchestrator |
2026-02-20 06:05:45.192677 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-20 06:05:45.192694 | orchestrator | Friday 20 February 2026 06:05:20 +0000 (0:00:01.857) 1:09:28.473 *******
2026-02-20 06:05:45.192728 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-20 06:05:45.192747 | orchestrator |
2026-02-20 06:05:45.192765 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-20 06:05:45.192782 | orchestrator | Friday 20 February 2026 06:05:22 +0000 (0:00:01.145) 1:09:29.619 *******
2026-02-20 06:05:45.192829 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192850 | orchestrator |
2026-02-20 06:05:45.192868 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-20 06:05:45.192912 | orchestrator | Friday 20 February 2026 06:05:23 +0000 (0:00:01.127) 1:09:30.747 *******
2026-02-20 06:05:45.192931 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.192948 | orchestrator |
2026-02-20 06:05:45.192966 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-20 06:05:45.192985 | orchestrator | Friday 20 February 2026 06:05:24 +0000 (0:00:01.125) 1:09:31.872 *******
2026-02-20 06:05:45.193005 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-20 06:05:45.193016 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-20 06:05:45.193028 | orchestrator |
2026-02-20 06:05:45.193039 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-20 06:05:45.193050 | orchestrator | Friday 20 February 2026 06:05:26 +0000 (0:00:01.854) 1:09:33.726 *******
2026-02-20 06:05:45.193061 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.193072 | orchestrator |
2026-02-20 06:05:45.193083 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-20 06:05:45.193105 | orchestrator | Friday 20 February 2026 06:05:27 +0000 (0:00:01.432) 1:09:35.159 *******
2026-02-20 06:05:45.193116 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193127 | orchestrator |
2026-02-20 06:05:45.193138 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-20 06:05:45.193149 | orchestrator | Friday 20 February 2026 06:05:28 +0000 (0:00:01.130) 1:09:36.290 *******
2026-02-20 06:05:45.193160 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193171 | orchestrator |
2026-02-20 06:05:45.193182 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-20 06:05:45.193193 | orchestrator | Friday 20 February 2026 06:05:29 +0000 (0:00:00.804) 1:09:37.094 *******
2026-02-20 06:05:45.193204 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193215 | orchestrator |
2026-02-20 06:05:45.193226 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-20 06:05:45.193238 | orchestrator | Friday 20 February 2026 06:05:30 +0000 (0:00:00.786) 1:09:37.881 *******
2026-02-20 06:05:45.193248 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-20 06:05:45.193259 | orchestrator |
2026-02-20 06:05:45.193270 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-20 06:05:45.193281 | orchestrator | Friday 20 February 2026 06:05:31 +0000 (0:00:01.112) 1:09:38.993 *******
2026-02-20 06:05:45.193292 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.193303 | orchestrator |
2026-02-20 06:05:45.193314 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-20 06:05:45.193325 | orchestrator | Friday 20 February 2026 06:05:33 +0000 (0:00:01.834) 1:09:40.828 *******
2026-02-20 06:05:45.193336 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-20 06:05:45.193347 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-20 06:05:45.193358 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-20 06:05:45.193369 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193385 | orchestrator |
2026-02-20 06:05:45.193403 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-20 06:05:45.193422 | orchestrator | Friday 20 February 2026 06:05:34 +0000 (0:00:01.161) 1:09:41.990 *******
2026-02-20 06:05:45.193452 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193470 | orchestrator |
2026-02-20 06:05:45.193488 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-20 06:05:45.193507 | orchestrator | Friday 20 February 2026 06:05:35 +0000 (0:00:01.138) 1:09:43.129 *******
2026-02-20 06:05:45.193526 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193545 | orchestrator |
2026-02-20 06:05:45.193564 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-20 06:05:45.193582 | orchestrator | Friday 20 February 2026 06:05:36 +0000 (0:00:01.145) 1:09:44.275 *******
2026-02-20 06:05:45.193601 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193618 | orchestrator |
2026-02-20 06:05:45.193630 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-20 06:05:45.193641 | orchestrator | Friday 20 February 2026 06:05:37 +0000 (0:00:01.095) 1:09:45.371 *******
2026-02-20 06:05:45.193652 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193663 | orchestrator |
2026-02-20 06:05:45.193674 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-20 06:05:45.193685 | orchestrator | Friday 20 February 2026 06:05:39 +0000 (0:00:01.116) 1:09:46.487 *******
2026-02-20 06:05:45.193696 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193707 | orchestrator |
2026-02-20 06:05:45.193718 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-20 06:05:45.193729 | orchestrator | Friday 20 February 2026 06:05:39 +0000 (0:00:00.761) 1:09:47.248 *******
2026-02-20 06:05:45.193740 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.193751 | orchestrator |
2026-02-20 06:05:45.193762 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-20 06:05:45.193773 | orchestrator | Friday 20 February 2026 06:05:41 +0000 (0:00:02.187) 1:09:49.436 *******
2026-02-20 06:05:45.193784 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:05:45.193795 | orchestrator |
2026-02-20 06:05:45.193827 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-20 06:05:45.193839 | orchestrator | Friday 20 February 2026 06:05:42 +0000 (0:00:00.781) 1:09:50.218 *******
2026-02-20 06:05:45.193849 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-20 06:05:45.193860 | orchestrator |
2026-02-20 06:05:45.193871 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-20 06:05:45.193882 | orchestrator | Friday 20 February 2026 06:05:43 +0000 (0:00:01.228) 1:09:51.446 *******
2026-02-20 06:05:45.193893 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:05:45.193904 | orchestrator |
2026-02-20 06:05:45.193915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-20 06:05:45.193979 | orchestrator | Friday 20 February 2026 06:05:45 +0000 (0:00:01.214) 1:09:52.660 *******
2026-02-20 06:06:26.976450 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.976581 | orchestrator |
2026-02-20 06:06:26.976601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-20 06:06:26.976617 | orchestrator | Friday 20 February 2026 06:05:46 +0000 (0:00:01.132) 1:09:53.793 *******
2026-02-20 06:06:26.976630 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.976643 | orchestrator |
2026-02-20 06:06:26.976674 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-20 06:06:26.976688 | orchestrator | Friday 20 February 2026 06:05:47 +0000 (0:00:01.123) 1:09:54.917 *******
2026-02-20 06:06:26.976702 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.976715 | orchestrator |
2026-02-20 06:06:26.976728 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-20 06:06:26.976740 | orchestrator | Friday 20 February 2026 06:05:48 +0000 (0:00:01.149) 1:09:56.066 *******
2026-02-20 06:06:26.976754 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.976767 | orchestrator |
2026-02-20 06:06:26.976890 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-20 06:06:26.976932 | orchestrator | Friday 20 February 2026 06:05:49 +0000 (0:00:01.124) 1:09:57.190 *******
2026-02-20 06:06:26.976945 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.976958 | orchestrator |
2026-02-20 06:06:26.976971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-20 06:06:26.976985 | orchestrator | Friday 20 February 2026 06:05:50 +0000 (0:00:01.122) 1:09:58.313 *******
2026-02-20 06:06:26.976998 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977013 | orchestrator |
2026-02-20 06:06:26.977027 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-20 06:06:26.977042 | orchestrator | Friday 20 February 2026 06:05:52 +0000 (0:00:01.199) 1:09:59.513 *******
2026-02-20 06:06:26.977056 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977070 | orchestrator |
2026-02-20 06:06:26.977083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-20 06:06:26.977096 | orchestrator | Friday 20 February 2026 06:05:53 +0000 (0:00:01.161) 1:10:00.675 *******
2026-02-20 06:06:26.977109 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:06:26.977124 | orchestrator |
2026-02-20 06:06:26.977137 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-20 06:06:26.977151 | orchestrator | Friday 20 February 2026 06:05:53 +0000 (0:00:00.784) 1:10:01.459 *******
2026-02-20 06:06:26.977165 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-20 06:06:26.977180 | orchestrator |
2026-02-20 06:06:26.977192 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-20 06:06:26.977204 | orchestrator | Friday 20 February 2026 06:05:55 +0000 (0:00:01.195) 1:10:02.655 *******
2026-02-20 06:06:26.977216 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-20 06:06:26.977229 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-20 06:06:26.977241 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-20 06:06:26.977254 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-20 06:06:26.977267 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-20 06:06:26.977279 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-20 06:06:26.977291 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-20 06:06:26.977303 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-20 06:06:26.977316 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-20 06:06:26.977328 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-20 06:06:26.977341 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-20 06:06:26.977353 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-20 06:06:26.977366 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-20 06:06:26.977379 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-20 06:06:26.977391 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-20 06:06:26.977403 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-20 06:06:26.977415 | orchestrator |
2026-02-20 06:06:26.977427 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-20 06:06:26.977440 | orchestrator | Friday 20 February 2026 06:06:01 +0000 (0:00:06.498) 1:10:09.153 *******
2026-02-20 06:06:26.977452 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-20 06:06:26.977465 | orchestrator |
2026-02-20 06:06:26.977477 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-20 06:06:26.977490 | orchestrator | Friday 20 February 2026 06:06:02 +0000 (0:00:01.121) 1:10:10.275 *******
2026-02-20 06:06:26.977502 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 06:06:26.977516 | orchestrator |
2026-02-20 06:06:26.977529 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-20 06:06:26.977554 | orchestrator | Friday 20 February 2026 06:06:04 +0000 (0:00:01.541) 1:10:11.817 *******
2026-02-20 06:06:26.977567 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 06:06:26.977580 | orchestrator |
2026-02-20 06:06:26.977592 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-20 06:06:26.977605 | orchestrator | Friday 20 February 2026 06:06:05 +0000 (0:00:01.620) 1:10:13.437 *******
2026-02-20 06:06:26.977617 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977630 | orchestrator |
2026-02-20 06:06:26.977643 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-20 06:06:26.977681 | orchestrator | Friday 20 February 2026 06:06:06 +0000 (0:00:00.769) 1:10:14.207 *******
2026-02-20 06:06:26.977696 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977710 | orchestrator |
2026-02-20 06:06:26.977723 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-20 06:06:26.977735 | orchestrator | Friday 20 February 2026 06:06:07 +0000 (0:00:00.777) 1:10:14.984 *******
2026-02-20 06:06:26.977748 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977761 | orchestrator |
2026-02-20 06:06:26.977809 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-20 06:06:26.977819 | orchestrator | Friday 20 February 2026 06:06:08 +0000 (0:00:00.796) 1:10:15.780 *******
2026-02-20 06:06:26.977827 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977835 | orchestrator |
2026-02-20 06:06:26.977843 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-20 06:06:26.977851 | orchestrator | Friday 20 February 2026 06:06:09 +0000 (0:00:00.747) 1:10:16.528 *******
2026-02-20 06:06:26.977859 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977867 | orchestrator |
2026-02-20 06:06:26.977884 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-20 06:06:26.977892 | orchestrator | Friday 20 February 2026 06:06:09 +0000 (0:00:00.782) 1:10:17.311 *******
2026-02-20 06:06:26.977901 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977909 | orchestrator |
2026-02-20 06:06:26.977917 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-20 06:06:26.977925 | orchestrator | Friday 20 February 2026 06:06:10 +0000 (0:00:00.772) 1:10:18.083 *******
2026-02-20 06:06:26.977933 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977941 | orchestrator |
2026-02-20 06:06:26.977949 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-20 06:06:26.977957 | orchestrator | Friday 20 February 2026 06:06:11 +0000 (0:00:00.761) 1:10:18.845 *******
2026-02-20 06:06:26.977965 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.977973 | orchestrator |
2026-02-20 06:06:26.977981 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-20 06:06:26.977989 | orchestrator | Friday 20 February 2026 06:06:12 +0000 (0:00:00.812) 1:10:19.658 *******
2026-02-20 06:06:26.977997 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.978005 | orchestrator |
2026-02-20 06:06:26.978077 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-20 06:06:26.978088 | orchestrator | Friday 20 February 2026 06:06:12 +0000 (0:00:00.762) 1:10:20.420 *******
2026-02-20 06:06:26.978097 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.978105 | orchestrator |
2026-02-20 06:06:26.978113 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-20 06:06:26.978121 | orchestrator | Friday 20 February 2026 06:06:13 +0000 (0:00:00.768) 1:10:21.188 *******
2026-02-20 06:06:26.978129 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:06:26.978137 | orchestrator |
2026-02-20 06:06:26.978145 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-20 06:06:26.978153 | orchestrator | Friday 20 February 2026 06:06:14 +0000 (0:00:00.785) 1:10:21.974 *******
2026-02-20 06:06:26.978170 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-20 06:06:26.978178 | orchestrator |
2026-02-20 06:06:26.978186 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-20 06:06:26.978194 | orchestrator | Friday 20 February 2026 06:06:18 +0000 (0:00:04.350) 1:10:26.324 *******
2026-02-20 06:06:26.978202 | orchestrator |
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 06:06:26.978210 | orchestrator | 2026-02-20 06:06:26.978218 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-20 06:06:26.978226 | orchestrator | Friday 20 February 2026 06:06:19 +0000 (0:00:00.846) 1:10:27.172 ******* 2026-02-20 06:06:26.978236 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-20 06:06:26.978248 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-20 06:06:26.978258 | orchestrator | 2026-02-20 06:06:26.978266 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-20 06:06:26.978274 | orchestrator | Friday 20 February 2026 06:06:24 +0000 (0:00:04.903) 1:10:32.075 ******* 2026-02-20 06:06:26.978282 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:06:26.978290 | orchestrator | 2026-02-20 06:06:26.978298 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-20 06:06:26.978305 | orchestrator | Friday 20 February 2026 06:06:25 +0000 (0:00:00.804) 1:10:32.880 ******* 2026-02-20 06:06:26.978313 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:06:26.978321 | orchestrator | 2026-02-20 06:06:26.978329 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-20 06:06:26.978337 | orchestrator | Friday 20 February 2026 06:06:26 +0000 (0:00:00.770) 1:10:33.650 ******* 2026-02-20 06:06:26.978345 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:06:26.978353 | orchestrator | 2026-02-20 06:06:26.978361 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-20 06:06:26.978378 | orchestrator | Friday 20 February 2026 06:06:26 +0000 (0:00:00.798) 1:10:34.448 ******* 2026-02-20 06:07:33.639711 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:07:33.639834 | orchestrator | 2026-02-20 06:07:33.639845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-20 06:07:33.639853 | orchestrator | Friday 20 February 2026 06:06:27 +0000 (0:00:00.795) 1:10:35.244 ******* 2026-02-20 06:07:33.639860 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:07:33.639866 | orchestrator | 2026-02-20 06:07:33.639872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-20 06:07:33.639878 | orchestrator | Friday 20 February 2026 06:06:28 +0000 (0:00:00.827) 1:10:36.072 ******* 2026-02-20 06:07:33.639884 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:07:33.639891 | orchestrator | 2026-02-20 06:07:33.639897 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-20 06:07:33.639904 | orchestrator | Friday 20 February 2026 06:06:29 +0000 (0:00:00.880) 1:10:36.952 ******* 2026-02-20 06:07:33.639923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 06:07:33.639929 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 06:07:33.639935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 06:07:33.639942 | orchestrator | skipping: 
[testbed-node-5] 2026-02-20 06:07:33.639964 | orchestrator | 2026-02-20 06:07:33.639970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-20 06:07:33.639976 | orchestrator | Friday 20 February 2026 06:06:30 +0000 (0:00:01.426) 1:10:38.379 ******* 2026-02-20 06:07:33.639982 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 06:07:33.639988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 06:07:33.639994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 06:07:33.640000 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:07:33.640006 | orchestrator | 2026-02-20 06:07:33.640012 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-20 06:07:33.640018 | orchestrator | Friday 20 February 2026 06:06:32 +0000 (0:00:01.384) 1:10:39.763 ******* 2026-02-20 06:07:33.640024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-20 06:07:33.640030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-20 06:07:33.640036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-20 06:07:33.640041 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:07:33.640047 | orchestrator | 2026-02-20 06:07:33.640053 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-20 06:07:33.640059 | orchestrator | Friday 20 February 2026 06:06:33 +0000 (0:00:01.100) 1:10:40.864 ******* 2026-02-20 06:07:33.640065 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:07:33.640071 | orchestrator | 2026-02-20 06:07:33.640077 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-20 06:07:33.640083 | orchestrator | Friday 20 February 2026 06:06:34 +0000 (0:00:00.792) 1:10:41.657 ******* 2026-02-20 06:07:33.640089 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-02-20 06:07:33.640095 | orchestrator | 2026-02-20 06:07:33.640101 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-20 06:07:33.640107 | orchestrator | Friday 20 February 2026 06:06:35 +0000 (0:00:00.993) 1:10:42.650 ******* 2026-02-20 06:07:33.640113 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:07:33.640118 | orchestrator | 2026-02-20 06:07:33.640124 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-20 06:07:33.640130 | orchestrator | Friday 20 February 2026 06:06:36 +0000 (0:00:01.372) 1:10:44.023 ******* 2026-02-20 06:07:33.640136 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-02-20 06:07:33.640142 | orchestrator | 2026-02-20 06:07:33.640148 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 06:07:33.640154 | orchestrator | Friday 20 February 2026 06:06:37 +0000 (0:00:01.097) 1:10:45.121 ******* 2026-02-20 06:07:33.640160 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 06:07:33.640166 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 06:07:33.640172 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-20 06:07:33.640178 | orchestrator | 2026-02-20 06:07:33.640184 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-20 06:07:33.640198 | orchestrator | Friday 20 February 2026 06:06:40 +0000 (0:00:03.240) 1:10:48.362 ******* 2026-02-20 06:07:33.640204 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-20 06:07:33.640210 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-20 06:07:33.640216 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:07:33.640222 | orchestrator | 2026-02-20 06:07:33.640228 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-20 06:07:33.640234 | orchestrator | Friday 20 February 2026 06:06:42 +0000 (0:00:01.946) 1:10:50.308 ******* 2026-02-20 06:07:33.640240 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:07:33.640248 | orchestrator | 2026-02-20 06:07:33.640258 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-20 06:07:33.640268 | orchestrator | Friday 20 February 2026 06:06:43 +0000 (0:00:00.755) 1:10:51.064 ******* 2026-02-20 06:07:33.640285 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-20 06:07:33.640297 | orchestrator | 2026-02-20 06:07:33.640306 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-20 06:07:33.640316 | orchestrator | Friday 20 February 2026 06:06:44 +0000 (0:00:01.185) 1:10:52.249 ******* 2026-02-20 06:07:33.640326 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-20 06:07:33.640337 | orchestrator | 2026-02-20 06:07:33.640346 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-20 06:07:33.640355 | orchestrator | Friday 20 February 2026 06:06:46 +0000 (0:00:01.585) 1:10:53.835 ******* 2026-02-20 06:07:33.640380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-20 06:07:33.640392 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-20 06:07:33.640404 | orchestrator | 2026-02-20 06:07:33.640415 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-20 06:07:33.640426 | orchestrator | Friday 20 February 2026 06:06:51 +0000 (0:00:05.260) 1:10:59.095 ******* 
2026-02-20 06:07:33.640433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-20 06:07:33.640440 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-20 06:07:33.640447 | orchestrator |
2026-02-20 06:07:33.640454 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-20 06:07:33.640467 | orchestrator | Friday 20 February 2026 06:06:54 +0000 (0:00:03.130) 1:11:02.225 *******
2026-02-20 06:07:33.640474 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-20 06:07:33.640481 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:07:33.640488 | orchestrator |
2026-02-20 06:07:33.640495 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-20 06:07:33.640502 | orchestrator | Friday 20 February 2026 06:06:56 +0000 (0:00:01.653) 1:11:03.879 *******
2026-02-20 06:07:33.640508 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-02-20 06:07:33.640515 | orchestrator |
2026-02-20 06:07:33.640522 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-20 06:07:33.640529 | orchestrator | Friday 20 February 2026 06:06:57 +0000 (0:00:01.136) 1:11:05.015 *******
2026-02-20 06:07:33.640536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640572 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:07:33.640579 | orchestrator |
2026-02-20 06:07:33.640586 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-20 06:07:33.640593 | orchestrator | Friday 20 February 2026 06:06:59 +0000 (0:00:01.624) 1:11:06.639 *******
2026-02-20 06:07:33.640600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640639 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:07:33.640646 | orchestrator |
2026-02-20 06:07:33.640653 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-20 06:07:33.640660 | orchestrator | Friday 20 February 2026 06:07:00 +0000 (0:00:01.611) 1:11:08.251 *******
2026-02-20 06:07:33.640667 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640674 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640680 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640686 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640694 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-20 06:07:33.640700 | orchestrator |
2026-02-20 06:07:33.640706 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-20 06:07:33.640711 | orchestrator | Friday 20 February 2026 06:07:32 +0000 (0:00:32.122) 1:11:40.374 *******
2026-02-20 06:07:33.640717 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:07:33.640723 | orchestrator |
2026-02-20 06:07:33.640744 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-20 06:07:33.640754 | orchestrator | Friday 20 February 2026 06:07:33 +0000 (0:00:00.740) 1:11:41.115 *******
2026-02-20 06:08:25.628997 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.629147 | orchestrator |
2026-02-20 06:08:25.629180 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-20 06:08:25.629203 | orchestrator | Friday 20 February 2026 06:07:34 +0000 (0:00:00.779) 1:11:41.894 *******
2026-02-20 06:08:25.629225 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-02-20 06:08:25.629246 | orchestrator |
2026-02-20 06:08:25.629266 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-20 06:08:25.629287 | orchestrator | Friday 20 February 2026 06:07:35 +0000 (0:00:01.087) 1:11:42.982 *******
2026-02-20 06:08:25.629308 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-02-20 06:08:25.629325 | orchestrator |
2026-02-20 06:08:25.629337 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-20 06:08:25.629365 | orchestrator | Friday 20 February 2026 06:07:36 +0000 (0:00:02.032) 1:11:44.057 *******
2026-02-20 06:08:25.629377 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.629389 | orchestrator |
2026-02-20 06:08:25.629401 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-20 06:08:25.629412 | orchestrator | Friday 20 February 2026 06:07:38 +0000 (0:00:01.924) 1:11:46.090 *******
2026-02-20 06:08:25.629423 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.629436 | orchestrator |
2026-02-20 06:08:25.629455 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-20 06:08:25.629473 | orchestrator | Friday 20 February 2026 06:07:40 +0000 (0:00:02.225) 1:11:48.014 *******
2026-02-20 06:08:25.629492 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.629511 | orchestrator |
2026-02-20 06:08:25.629530 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-20 06:08:25.629577 | orchestrator | Friday 20 February 2026 06:07:42 +0000 (0:00:02.225) 1:11:50.240 *******
2026-02-20 06:08:25.629600 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-20 06:08:25.629620 | orchestrator |
2026-02-20 06:08:25.629640 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-02-20 06:08:25.629659 | orchestrator | skipping: no hosts matched
2026-02-20 06:08:25.629677 | orchestrator |
2026-02-20 06:08:25.629757 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-02-20 06:08:25.629777 | orchestrator | skipping: no hosts matched
2026-02-20 06:08:25.629793 | orchestrator |
2026-02-20 06:08:25.629804 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-02-20 06:08:25.629815 | orchestrator | skipping: no hosts matched
2026-02-20 06:08:25.629826 | orchestrator |
2026-02-20 06:08:25.629837 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-02-20 06:08:25.629848 | orchestrator |
2026-02-20 06:08:25.629859 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-02-20 06:08:25.629869 | orchestrator | Friday 20 February 2026 06:07:47 +0000 (0:00:04.325) 1:11:54.566 *******
2026-02-20 06:08:25.629880 | orchestrator | changed: [testbed-node-0]
2026-02-20 06:08:25.629891 | orchestrator | changed: [testbed-node-1]
2026-02-20 06:08:25.629902 | orchestrator | changed: [testbed-node-3]
2026-02-20 06:08:25.629913 | orchestrator | changed: [testbed-node-2]
2026-02-20 06:08:25.629923 | orchestrator | changed: [testbed-node-4]
2026-02-20 06:08:25.629934 | orchestrator | changed: [testbed-node-5]
2026-02-20 06:08:25.629945 | orchestrator |
2026-02-20 06:08:25.629956 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-02-20 06:08:25.629967 | orchestrator | Friday 20 February 2026 06:07:49 +0000 (0:00:02.764) 1:11:57.331 *******
2026-02-20 06:08:25.629978 | orchestrator | changed: [testbed-node-0]
2026-02-20 06:08:25.629989 | orchestrator | changed: [testbed-node-3]
2026-02-20 06:08:25.630000 | orchestrator | changed: [testbed-node-1]
2026-02-20 06:08:25.630010 | orchestrator | changed: [testbed-node-4]
2026-02-20 06:08:25.630081 | orchestrator | changed: [testbed-node-5]
2026-02-20 06:08:25.630093 | orchestrator | changed: [testbed-node-2]
2026-02-20 06:08:25.630104 | orchestrator |
2026-02-20 06:08:25.630115 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-20 06:08:25.630126 | orchestrator | Friday 20 February 2026 06:07:53 +0000 (0:00:03.861) 1:12:01.192 *******
2026-02-20 06:08:25.630138 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.630149 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.630160 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.630174 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.630194 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.630212 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.630230 | orchestrator |
2026-02-20 06:08:25.630249 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-20 06:08:25.630267 | orchestrator | Friday 20 February 2026 06:07:55 +0000 (0:00:02.191) 1:12:03.384 *******
2026-02-20 06:08:25.630287 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.630306 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.630326 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.630346 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.630365 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.630384 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.630403 | orchestrator |
2026-02-20 06:08:25.630414 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-20 06:08:25.630426 | orchestrator | Friday 20 February 2026 06:07:57 +0000 (0:00:01.821) 1:12:05.206 *******
2026-02-20 06:08:25.630438 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 06:08:25.630470 | orchestrator |
2026-02-20 06:08:25.630481 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-20 06:08:25.630492 | orchestrator | Friday 20 February 2026 06:07:59 +0000 (0:00:01.959) 1:12:07.165 *******
2026-02-20 06:08:25.630503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-20 06:08:25.630514 | orchestrator |
2026-02-20 06:08:25.630548 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-20 06:08:25.630560 | orchestrator | Friday 20 February 2026 06:08:01 +0000 (0:00:02.016) 1:12:09.181 *******
2026-02-20 06:08:25.630571 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.630582 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.630593 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.630604 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.630615 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.630626 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.630636 | orchestrator |
2026-02-20 06:08:25.630647 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-20 06:08:25.630658 | orchestrator | Friday 20 February 2026 06:08:03 +0000 (0:00:01.936) 1:12:11.118 *******
2026-02-20 06:08:25.630669 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.630680 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.630715 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.630736 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.630747 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.630758 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.630769 | orchestrator |
2026-02-20 06:08:25.630780 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-20 06:08:25.630791 | orchestrator | Friday 20 February 2026 06:08:05 +0000 (0:00:02.094) 1:12:13.212 *******
2026-02-20 06:08:25.630806 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.630824 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.630842 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.630861 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.630879 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.630898 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.630918 | orchestrator |
2026-02-20 06:08:25.630937 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-20 06:08:25.630956 | orchestrator | Friday 20 February 2026 06:08:07 +0000 (0:00:01.954) 1:12:15.166 *******
2026-02-20 06:08:25.630969 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.630980 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.630991 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.631001 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.631012 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.631023 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.631034 | orchestrator |
2026-02-20 06:08:25.631045 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-20 06:08:25.631056 | orchestrator | Friday 20 February 2026 06:08:09 +0000 (0:00:02.060) 1:12:17.226 *******
2026-02-20 06:08:25.631066 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.631077 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.631087 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.631098 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.631108 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.631119 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.631130 | orchestrator |
2026-02-20 06:08:25.631141 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-20 06:08:25.631151 | orchestrator | Friday 20 February 2026 06:08:11 +0000 (0:00:02.200) 1:12:19.427 *******
2026-02-20 06:08:25.631162 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.631173 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.631183 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.631204 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.631214 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.631225 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.631236 | orchestrator |
2026-02-20 06:08:25.631247 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-20 06:08:25.631258 | orchestrator | Friday 20 February 2026 06:08:13 +0000 (0:00:01.938) 1:12:21.366 *******
2026-02-20 06:08:25.631268 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.631286 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.631305 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.631325 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.631343 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.631362 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.631380 | orchestrator |
2026-02-20 06:08:25.631397 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-20 06:08:25.631417 | orchestrator | Friday 20 February 2026 06:08:16 +0000 (0:00:02.450) 1:12:23.816 *******
2026-02-20 06:08:25.631436 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.631455 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.631474 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.631493 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.631511 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.631528 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.631539 | orchestrator |
2026-02-20 06:08:25.631550 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-20 06:08:25.631561 | orchestrator | Friday 20 February 2026 06:08:19 +0000 (0:00:02.827) 1:12:26.643 *******
2026-02-20 06:08:25.631571 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.631582 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.631593 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.631603 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:08:25.631614 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:08:25.631625 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:08:25.631635 | orchestrator |
2026-02-20 06:08:25.631646 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-20 06:08:25.631657 | orchestrator | Friday 20 February 2026 06:08:21 +0000 (0:00:01.897) 1:12:29.242 *******
2026-02-20 06:08:25.631668 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:08:25.631679 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:08:25.631716 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:08:25.631728 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.631739 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.631750 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.631761 | orchestrator |
2026-02-20 06:08:25.631772 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-20 06:08:25.631782 | orchestrator | Friday 20 February 2026 06:08:23 +0000 (0:00:01.960) 1:12:31.139 *******
2026-02-20 06:08:25.631793 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:08:25.631804 | orchestrator | ok: [testbed-node-1]
2026-02-20 06:08:25.631815 | orchestrator | ok: [testbed-node-2]
2026-02-20 06:08:25.631825 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:08:25.631836 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:08:25.631847 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:08:25.631858 | orchestrator |
2026-02-20 06:08:25.631881 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-20 06:09:21.413322 | orchestrator | Friday 20 February 2026 06:08:25 +0000 (0:00:01.960) 1:12:33.099 *******
2026-02-20 06:09:21.413477 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:09:21.413496 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:09:21.413507 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:09:21.413518 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:09:21.413529 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:09:21.413539 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:09:21.413551 | orchestrator |
2026-02-20 06:09:21.413564 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-20 06:09:21.413599 | orchestrator | Friday 20 February 2026 06:08:27 +0000 (0:00:01.890) 1:12:34.990 *******
2026-02-20 06:09:21.413612 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:09:21.413624 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:09:21.413637 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:09:21.413698 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:09:21.413711 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:09:21.413723 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:09:21.413734 | orchestrator |
2026-02-20 06:09:21.413747 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-20 06:09:21.413759 | orchestrator | Friday 20 February 2026 06:08:29 +0000 (0:00:01.953) 1:12:36.943 *******
2026-02-20 06:09:21.413771 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:09:21.413784 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:09:21.413796 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:09:21.413808 | orchestrator | ok: [testbed-node-3]
2026-02-20 06:09:21.413820 | orchestrator | ok: [testbed-node-4]
2026-02-20 06:09:21.413832 | orchestrator | ok: [testbed-node-5]
2026-02-20 06:09:21.413844 | orchestrator |
2026-02-20 06:09:21.413857 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-20 06:09:21.413869 | orchestrator | Friday 20 February 2026 06:08:31 +0000 (0:00:02.055) 1:12:38.999 *******
2026-02-20 06:09:21.413882 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:09:21.413894 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:09:21.413906 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:09:21.413918 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:09:21.413931 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:09:21.413943 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:09:21.413956 | orchestrator |
2026-02-20 06:09:21.413968 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-20 06:09:21.413981 | orchestrator | Friday 20 February 2026 06:08:33 +0000 (0:00:01.909) 1:12:40.909 *******
2026-02-20 06:09:21.413994 | orchestrator | skipping: [testbed-node-0]
2026-02-20 06:09:21.414006 | orchestrator | skipping: [testbed-node-1]
2026-02-20 06:09:21.414076 | orchestrator | skipping: [testbed-node-2]
2026-02-20 06:09:21.414089 | orchestrator | skipping: [testbed-node-3]
2026-02-20 06:09:21.414102 | orchestrator | skipping: [testbed-node-4]
2026-02-20 06:09:21.414114 | orchestrator | skipping: [testbed-node-5]
2026-02-20 06:09:21.414125 | orchestrator |
2026-02-20 06:09:21.414139 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-20 06:09:21.414151 | orchestrator | Friday 20 February 2026 06:08:35 +0000 (0:00:02.198) 1:12:43.107 *******
2026-02-20 06:09:21.414163 |
orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414175 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414188 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414200 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:09:21.414212 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:09:21.414224 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:09:21.414236 | orchestrator | 2026-02-20 06:09:21.414247 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-20 06:09:21.414259 | orchestrator | Friday 20 February 2026 06:08:37 +0000 (0:00:01.913) 1:12:45.020 ******* 2026-02-20 06:09:21.414270 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414282 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414293 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414304 | orchestrator | ok: [testbed-node-3] 2026-02-20 06:09:21.414316 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:09:21.414327 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:09:21.414339 | orchestrator | 2026-02-20 06:09:21.414350 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-20 06:09:21.414361 | orchestrator | Friday 20 February 2026 06:08:39 +0000 (0:00:02.259) 1:12:47.280 ******* 2026-02-20 06:09:21.414373 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414384 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414405 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414417 | orchestrator | ok: [testbed-node-3] 2026-02-20 06:09:21.414428 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:09:21.414440 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:09:21.414452 | orchestrator | 2026-02-20 06:09:21.414463 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-20 06:09:21.414475 | orchestrator | Friday 20 February 2026 06:08:42 +0000 (0:00:02.277) 
1:12:49.558 ******* 2026-02-20 06:09:21.414486 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414498 | orchestrator | 2026-02-20 06:09:21.414510 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-20 06:09:21.414521 | orchestrator | Friday 20 February 2026 06:08:45 +0000 (0:00:03.194) 1:12:52.753 ******* 2026-02-20 06:09:21.414533 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414545 | orchestrator | 2026-02-20 06:09:21.414556 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-20 06:09:21.414568 | orchestrator | Friday 20 February 2026 06:08:48 +0000 (0:00:03.226) 1:12:55.980 ******* 2026-02-20 06:09:21.414580 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414591 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414603 | orchestrator | ok: [testbed-node-3] 2026-02-20 06:09:21.414615 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414626 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:09:21.414638 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:09:21.414665 | orchestrator | 2026-02-20 06:09:21.414675 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-20 06:09:21.414684 | orchestrator | Friday 20 February 2026 06:08:51 +0000 (0:00:02.724) 1:12:58.705 ******* 2026-02-20 06:09:21.414695 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414707 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414718 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414729 | orchestrator | ok: [testbed-node-3] 2026-02-20 06:09:21.414741 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:09:21.414752 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:09:21.414764 | orchestrator | 2026-02-20 06:09:21.414775 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-20 06:09:21.414808 | orchestrator 
| Friday 20 February 2026 06:08:53 +0000 (0:00:02.383) 1:13:01.089 ******* 2026-02-20 06:09:21.414821 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-20 06:09:21.414834 | orchestrator | 2026-02-20 06:09:21.414845 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-20 06:09:21.414857 | orchestrator | Friday 20 February 2026 06:08:56 +0000 (0:00:02.452) 1:13:03.541 ******* 2026-02-20 06:09:21.414868 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.414880 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.414891 | orchestrator | ok: [testbed-node-3] 2026-02-20 06:09:21.414902 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.414914 | orchestrator | ok: [testbed-node-4] 2026-02-20 06:09:21.414931 | orchestrator | ok: [testbed-node-5] 2026-02-20 06:09:21.414942 | orchestrator | 2026-02-20 06:09:21.414954 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-20 06:09:21.414965 | orchestrator | Friday 20 February 2026 06:08:58 +0000 (0:00:02.690) 1:13:06.232 ******* 2026-02-20 06:09:21.414977 | orchestrator | changed: [testbed-node-4] 2026-02-20 06:09:21.414989 | orchestrator | changed: [testbed-node-1] 2026-02-20 06:09:21.415000 | orchestrator | changed: [testbed-node-0] 2026-02-20 06:09:21.415012 | orchestrator | changed: [testbed-node-3] 2026-02-20 06:09:21.415024 | orchestrator | changed: [testbed-node-5] 2026-02-20 06:09:21.415035 | orchestrator | changed: [testbed-node-2] 2026-02-20 06:09:21.415046 | orchestrator | 2026-02-20 06:09:21.415058 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-20 06:09:21.415068 | orchestrator | 2026-02-20 06:09:21.415080 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-20 06:09:21.415099 | orchestrator | Friday 20 February 2026 06:09:03 +0000 (0:00:04.728) 1:13:10.960 ******* 2026-02-20 06:09:21.415110 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.415122 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.415134 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.415145 | orchestrator | 2026-02-20 06:09:21.415157 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 06:09:21.415168 | orchestrator | Friday 20 February 2026 06:09:05 +0000 (0:00:01.647) 1:13:12.608 ******* 2026-02-20 06:09:21.415180 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.415191 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:09:21.415203 | orchestrator | ok: [testbed-node-2] 2026-02-20 06:09:21.415214 | orchestrator | 2026-02-20 06:09:21.415226 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-20 06:09:21.415239 | orchestrator | Friday 20 February 2026 06:09:06 +0000 (0:00:01.824) 1:13:14.433 ******* 2026-02-20 06:09:21.415250 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:09:21.415261 | orchestrator | 2026-02-20 06:09:21.415273 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-20 06:09:21.415285 | orchestrator | Friday 20 February 2026 06:09:09 +0000 (0:00:02.295) 1:13:16.729 ******* 2026-02-20 06:09:21.415296 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415307 | orchestrator | 2026-02-20 06:09:21.415319 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-20 06:09:21.415330 | orchestrator | 2026-02-20 06:09:21.415341 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-20 06:09:21.415353 | orchestrator | Friday 20 February 2026 06:09:11 +0000 (0:00:02.201) 1:13:18.930 ******* 2026-02-20 
06:09:21.415365 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415377 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:09:21.415388 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:09:21.415400 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:09:21.415411 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:09:21.415423 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:09:21.415434 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:09:21.415445 | orchestrator | 2026-02-20 06:09:21.415457 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 06:09:21.415469 | orchestrator | Friday 20 February 2026 06:09:13 +0000 (0:00:01.891) 1:13:20.822 ******* 2026-02-20 06:09:21.415480 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415491 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:09:21.415502 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:09:21.415514 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:09:21.415525 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:09:21.415537 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:09:21.415549 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:09:21.415560 | orchestrator | 2026-02-20 06:09:21.415571 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-20 06:09:21.415583 | orchestrator | Friday 20 February 2026 06:09:15 +0000 (0:00:02.575) 1:13:23.397 ******* 2026-02-20 06:09:21.415594 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415606 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:09:21.415617 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:09:21.415629 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:09:21.415641 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:09:21.415677 | orchestrator | skipping: [testbed-node-5] 2026-02-20 
06:09:21.415689 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:09:21.415700 | orchestrator | 2026-02-20 06:09:21.415711 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-20 06:09:21.415724 | orchestrator | Friday 20 February 2026 06:09:18 +0000 (0:00:02.297) 1:13:25.694 ******* 2026-02-20 06:09:21.415736 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415747 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:09:21.415759 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:09:21.415777 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:09:21.415789 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:09:21.415800 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:09:21.415812 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:09:21.415824 | orchestrator | 2026-02-20 06:09:21.415835 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-20 06:09:21.415848 | orchestrator | Friday 20 February 2026 06:09:20 +0000 (0:00:02.646) 1:13:28.341 ******* 2026-02-20 06:09:21.415857 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:09:21.415867 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:09:21.415877 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:09:21.415895 | orchestrator | skipping: [testbed-node-3] 2026-02-20 06:10:09.754391 | orchestrator | skipping: [testbed-node-4] 2026-02-20 06:10:09.754520 | orchestrator | skipping: [testbed-node-5] 2026-02-20 06:10:09.754539 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.754556 | orchestrator | 2026-02-20 06:10:09.754573 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-20 06:10:09.754591 | orchestrator | 2026-02-20 06:10:09.754607 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-20 06:10:09.754684 | 
orchestrator | Friday 20 February 2026 06:09:23 +0000 (0:00:02.858) 1:13:31.200 ******* 2026-02-20 06:10:09.754700 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-20 06:10:09.754717 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-20 06:10:09.754733 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-20 06:10:09.754765 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.754781 | orchestrator | 2026-02-20 06:10:09.754797 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-20 06:10:09.754813 | orchestrator | Friday 20 February 2026 06:09:24 +0000 (0:00:01.111) 1:13:32.312 ******* 2026-02-20 06:10:09.754828 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.754843 | orchestrator | 2026-02-20 06:10:09.754859 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-20 06:10:09.754875 | orchestrator | Friday 20 February 2026 06:09:25 +0000 (0:00:01.138) 1:13:33.450 ******* 2026-02-20 06:10:09.754891 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.754906 | orchestrator | 2026-02-20 06:10:09.754924 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-20 06:10:09.754939 | orchestrator | Friday 20 February 2026 06:09:27 +0000 (0:00:01.112) 1:13:34.563 ******* 2026-02-20 06:10:09.754955 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.754970 | orchestrator | 2026-02-20 06:10:09.754986 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-20 06:10:09.755001 | orchestrator | Friday 20 February 2026 06:09:28 +0000 (0:00:01.117) 1:13:35.681 ******* 2026-02-20 06:10:09.755017 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755032 | orchestrator | 2026-02-20 06:10:09.755046 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-20 06:10:09.755061 | orchestrator | Friday 20 February 2026 06:09:29 +0000 (0:00:01.103) 1:13:36.784 ******* 2026-02-20 06:10:09.755076 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-20 06:10:09.755092 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-20 06:10:09.755107 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755124 | orchestrator | 2026-02-20 06:10:09.755140 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-20 06:10:09.755154 | orchestrator | Friday 20 February 2026 06:09:30 +0000 (0:00:01.193) 1:13:37.977 ******* 2026-02-20 06:10:09.755170 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755184 | orchestrator | 2026-02-20 06:10:09.755199 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-20 06:10:09.755215 | orchestrator | Friday 20 February 2026 06:09:31 +0000 (0:00:01.112) 1:13:39.090 ******* 2026-02-20 06:10:09.755257 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755273 | orchestrator | 2026-02-20 06:10:09.755288 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-20 06:10:09.755303 | orchestrator | Friday 20 February 2026 06:09:32 +0000 (0:00:01.105) 1:13:40.195 ******* 2026-02-20 06:10:09.755318 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755333 | orchestrator | 2026-02-20 06:10:09.755347 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-20 06:10:09.755362 | orchestrator | Friday 20 February 2026 06:09:33 +0000 (0:00:01.106) 1:13:41.301 ******* 2026-02-20 06:10:09.755377 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-20 06:10:09.755392 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-20 06:10:09.755408 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755423 | orchestrator | 2026-02-20 06:10:09.755438 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-20 06:10:09.755453 | orchestrator | Friday 20 February 2026 06:09:34 +0000 (0:00:01.096) 1:13:42.398 ******* 2026-02-20 06:10:09.755469 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755483 | orchestrator | 2026-02-20 06:10:09.755499 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-20 06:10:09.755514 | orchestrator | Friday 20 February 2026 06:09:36 +0000 (0:00:01.176) 1:13:43.575 ******* 2026-02-20 06:10:09.755530 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755544 | orchestrator | 2026-02-20 06:10:09.755560 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-20 06:10:09.755575 | orchestrator | Friday 20 February 2026 06:09:37 +0000 (0:00:01.122) 1:13:44.697 ******* 2026-02-20 06:10:09.755591 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755606 | orchestrator | 2026-02-20 06:10:09.755645 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-20 06:10:09.755661 | orchestrator | Friday 20 February 2026 06:09:38 +0000 (0:00:01.103) 1:13:45.800 ******* 2026-02-20 06:10:09.755675 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:09.755690 | orchestrator | 2026-02-20 06:10:09.755704 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-20 06:10:09.755719 | orchestrator | 2026-02-20 06:10:09.755734 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-20 06:10:09.755750 | orchestrator | Friday 20 February 2026 06:09:40 +0000 (0:00:01.884) 1:13:47.684 ******* 2026-02-20 
06:10:09.755765 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.755781 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.755796 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.755811 | orchestrator | 2026-02-20 06:10:09.755828 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-20 06:10:09.755844 | orchestrator | Friday 20 February 2026 06:09:41 +0000 (0:00:01.306) 1:13:48.990 ******* 2026-02-20 06:10:09.755860 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.755874 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.755914 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.755929 | orchestrator | 2026-02-20 06:10:09.755944 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-20 06:10:09.755959 | orchestrator | Friday 20 February 2026 06:09:42 +0000 (0:00:01.398) 1:13:50.389 ******* 2026-02-20 06:10:09.755974 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.755990 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.756004 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.756020 | orchestrator | 2026-02-20 06:10:09.756035 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-20 06:10:09.756050 | orchestrator | Friday 20 February 2026 06:09:44 +0000 (0:00:01.618) 1:13:52.007 ******* 2026-02-20 06:10:09.756066 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.756081 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.756104 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.756130 | orchestrator | 2026-02-20 06:10:09.756146 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-20 06:10:09.756162 | orchestrator | Friday 20 February 2026 06:09:45 +0000 (0:00:01.339) 1:13:53.347 ******* 2026-02-20 
06:10:09.756177 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.756192 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.756208 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.756223 | orchestrator | 2026-02-20 06:10:09.756239 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-20 06:10:09.756254 | orchestrator | Friday 20 February 2026 06:09:47 +0000 (0:00:01.310) 1:13:54.658 ******* 2026-02-20 06:10:09.756269 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.756285 | orchestrator | skipping: [testbed-node-1] 2026-02-20 06:10:09.756301 | orchestrator | skipping: [testbed-node-2] 2026-02-20 06:10:09.756317 | orchestrator | 2026-02-20 06:10:09.756333 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-20 06:10:09.756349 | orchestrator | Friday 20 February 2026 06:09:48 +0000 (0:00:01.583) 1:13:56.241 ******* 2026-02-20 06:10:09.756363 | orchestrator | skipping: [testbed-node-0] 2026-02-20 06:10:09.756376 | orchestrator | 2026-02-20 06:10:09.756389 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-20 06:10:09.756404 | orchestrator | 2026-02-20 06:10:09.756419 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-20 06:10:09.756435 | orchestrator | Friday 20 February 2026 06:09:50 +0000 (0:00:01.531) 1:13:57.773 ******* 2026-02-20 06:10:09.756451 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756467 | orchestrator | 2026-02-20 06:10:09.756480 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-20 06:10:09.756493 | orchestrator | Friday 20 February 2026 06:09:51 +0000 (0:00:01.435) 1:13:59.209 ******* 2026-02-20 06:10:09.756506 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756519 | orchestrator | 2026-02-20 06:10:09.756532 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-20 06:10:09.756546 | orchestrator | Friday 20 February 2026 06:09:52 +0000 (0:00:01.171) 1:14:00.380 ******* 2026-02-20 06:10:09.756562 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756577 | orchestrator | 2026-02-20 06:10:09.756592 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-20 06:10:09.756608 | orchestrator | Friday 20 February 2026 06:09:54 +0000 (0:00:01.159) 1:14:01.540 ******* 2026-02-20 06:10:09.756701 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756716 | orchestrator | 2026-02-20 06:10:09.756733 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-20 06:10:09.756747 | orchestrator | Friday 20 February 2026 06:09:57 +0000 (0:00:02.963) 1:14:04.503 ******* 2026-02-20 06:10:09.756763 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756779 | orchestrator | 2026-02-20 06:10:09.756795 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-20 06:10:09.756810 | orchestrator | Friday 20 February 2026 06:10:00 +0000 (0:00:03.302) 1:14:07.806 ******* 2026-02-20 06:10:09.756824 | orchestrator | changed: [testbed-node-0] 2026-02-20 06:10:09.756839 | orchestrator | 2026-02-20 06:10:09.756856 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-20 06:10:09.756870 | orchestrator | 2026-02-20 06:10:09.756885 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-20 06:10:09.756902 | orchestrator | Friday 20 February 2026 06:10:02 +0000 (0:00:02.059) 1:14:09.865 ******* 2026-02-20 06:10:09.756916 | orchestrator | ok: [testbed-node-0] 2026-02-20 06:10:09.756933 | orchestrator | ok: [testbed-node-1] 2026-02-20 06:10:09.756949 | orchestrator | ok: [testbed-node-2] 2026-02-20 
06:10:09.756965 | orchestrator |
2026-02-20 06:10:09.756980 | orchestrator | TASK [Show ceph status] ********************************************************
2026-02-20 06:10:09.756997 | orchestrator | Friday 20 February 2026 06:10:03 +0000 (0:00:01.451) 1:14:11.317 *******
2026-02-20 06:10:09.757026 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:10:09.757041 | orchestrator |
2026-02-20 06:10:09.757058 | orchestrator | TASK [Show all daemons version] ************************************************
2026-02-20 06:10:09.757074 | orchestrator | Friday 20 February 2026 06:10:06 +0000 (0:00:02.375) 1:14:13.693 *******
2026-02-20 06:10:09.757090 | orchestrator | ok: [testbed-node-0]
2026-02-20 06:10:09.757105 | orchestrator |
2026-02-20 06:10:09.757120 | orchestrator | PLAY RECAP *********************************************************************
2026-02-20 06:10:09.757136 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-20 06:10:09.757154 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-02-20 06:10:09.757170 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-02-20 06:10:09.757184 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-02-20 06:10:09.757214 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-02-20 06:10:10.497664 | orchestrator | testbed-node-3 : ok=311  changed=21  unreachable=0 failed=0 skipped=341  rescued=0 ignored=0
2026-02-20 06:10:10.497762 | orchestrator | testbed-node-4 : ok=307  changed=17  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0
2026-02-20 06:10:10.497804 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-02-20 06:10:10.497822 | orchestrator |
2026-02-20 06:10:10.497839 | orchestrator |
2026-02-20 06:10:10.497854 | orchestrator |
2026-02-20 06:10:10.497868 | orchestrator | TASKS RECAP ********************************************************************
2026-02-20 06:10:10.497878 | orchestrator | Friday 20 February 2026 06:10:09 +0000 (0:00:03.506) 1:14:17.200 *******
2026-02-20 06:10:10.497894 | orchestrator | ===============================================================================
2026-02-20 06:10:10.497908 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 77.03s
2026-02-20 06:10:10.497923 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.12s
2026-02-20 06:10:10.497938 | orchestrator | Gather and delegate facts ---------------------------------------------- 33.81s
2026-02-20 06:10:10.497953 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 33.62s
2026-02-20 06:10:10.497968 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.51s
2026-02-20 06:10:10.497984 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.49s
2026-02-20 06:10:10.497998 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.12s
2026-02-20 06:10:10.498007 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 30.67s
2026-02-20 06:10:10.498084 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 24.21s
2026-02-20 06:10:10.498102 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.03s
2026-02-20 06:10:10.498118 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.93s
2026-02-20 06:10:10.498133 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 18.17s
2026-02-20 06:10:10.498148 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.93s
2026-02-20 06:10:10.498164 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.59s
2026-02-20 06:10:10.498179 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.89s
2026-02-20 06:10:10.498195 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.77s
2026-02-20 06:10:10.498239 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.61s
2026-02-20 06:10:10.498254 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 13.20s
2026-02-20 06:10:10.498271 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.76s
2026-02-20 06:10:10.498286 | orchestrator | Stop standby ceph mds -------------------------------------------------- 11.14s
2026-02-20 06:10:10.785103 | orchestrator | + osism apply cephclient
2026-02-20 06:10:12.804920 | orchestrator | 2026-02-20 06:10:12 | INFO  | Task d9ac2b49-b07b-42ee-997f-5b7135f6bf5d (cephclient) was prepared for execution.
2026-02-20 06:10:12.805036 | orchestrator | 2026-02-20 06:10:12 | INFO  | It takes a moment until task d9ac2b49-b07b-42ee-997f-5b7135f6bf5d (cephclient) has been started and output is visible here.
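The PLAY RECAP above is the canonical place to check whether a run like this succeeded: a host counts as clean when its `failed` and `unreachable` counters are both zero. A minimal sketch of how such recap lines can be parsed from console output (the regex and the `parse_recap`/`run_failed` helpers are illustrative assumptions, not part of osism, ceph-ansible, or Zuul tooling):

```python
import re

# Matches one Ansible PLAY RECAP host line as it appears in this log, e.g.
#   testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369 ...
# Hypothetical helper for post-processing console logs; not an official API.
RECAP_RE = re.compile(
    r"(?P<host>[A-Za-z0-9._-]+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)


def parse_recap(lines):
    """Map host -> counters for every recap line found in `lines`."""
    recap = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            recap[m.group("host")] = {
                key: int(val) for key, val in m.groupdict().items() if key != "host"
            }
    return recap


def run_failed(recap):
    """A run counts as failed if any host reports failed or unreachable tasks."""
    return any(c["failed"] > 0 or c["unreachable"] > 0 for c in recap.values())
```

Applied to the recap above, every host reports `failed=0` and `unreachable=0`, so such a check would treat this upgrade run as successful.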
2026-02-20 06:10:39.889096 | orchestrator | 2026-02-20 06:10:39.889240 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-20 06:10:39.889267 | orchestrator | 2026-02-20 06:10:39.889284 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-20 06:10:39.889296 | orchestrator | Friday 20 February 2026 06:10:20 +0000 (0:00:02.847) 0:00:02.847 ******* 2026-02-20 06:10:39.889306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-20 06:10:39.889318 | orchestrator | 2026-02-20 06:10:39.889328 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-20 06:10:39.889338 | orchestrator | Friday 20 February 2026 06:10:22 +0000 (0:00:01.752) 0:00:04.599 ******* 2026-02-20 06:10:39.889348 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-20 06:10:39.889359 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-20 06:10:39.889370 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-20 06:10:39.889380 | orchestrator | 2026-02-20 06:10:39.889389 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-20 06:10:39.889399 | orchestrator | Friday 20 February 2026 06:10:24 +0000 (0:00:02.292) 0:00:06.892 ******* 2026-02-20 06:10:39.889410 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-20 06:10:39.889420 | orchestrator | 2026-02-20 06:10:39.889430 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-20 06:10:39.889439 | orchestrator | Friday 20 February 2026 06:10:26 +0000 (0:00:01.993) 0:00:08.885 ******* 2026-02-20 06:10:39.889449 | orchestrator | ok: 
[testbed-manager] 2026-02-20 06:10:39.889459 | orchestrator | 2026-02-20 06:10:39.889469 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-20 06:10:39.889479 | orchestrator | Friday 20 February 2026 06:10:28 +0000 (0:00:01.752) 0:00:10.638 ******* 2026-02-20 06:10:39.889489 | orchestrator | ok: [testbed-manager] 2026-02-20 06:10:39.889499 | orchestrator | 2026-02-20 06:10:39.889508 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-20 06:10:39.889518 | orchestrator | Friday 20 February 2026 06:10:29 +0000 (0:00:01.740) 0:00:12.378 ******* 2026-02-20 06:10:39.889528 | orchestrator | ok: [testbed-manager] 2026-02-20 06:10:39.889538 | orchestrator | 2026-02-20 06:10:39.889547 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-20 06:10:39.889557 | orchestrator | Friday 20 February 2026 06:10:31 +0000 (0:00:01.918) 0:00:14.297 ******* 2026-02-20 06:10:39.889567 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-20 06:10:39.889628 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-20 06:10:39.889648 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-20 06:10:39.889661 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-20 06:10:39.889673 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-20 06:10:39.889685 | orchestrator | 2026-02-20 06:10:39.889696 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-20 06:10:39.889731 | orchestrator | Friday 20 February 2026 06:10:36 +0000 (0:00:04.258) 0:00:18.556 ******* 2026-02-20 06:10:39.889743 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-20 06:10:39.889755 | orchestrator | 2026-02-20 06:10:39.889765 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-20 06:10:39.889777 
| orchestrator | Friday 20 February 2026 06:10:37 +0000 (0:00:01.151) 0:00:19.707 ******* 2026-02-20 06:10:39.889788 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:39.889799 | orchestrator | 2026-02-20 06:10:39.889810 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-20 06:10:39.889821 | orchestrator | Friday 20 February 2026 06:10:38 +0000 (0:00:01.101) 0:00:20.809 ******* 2026-02-20 06:10:39.889832 | orchestrator | skipping: [testbed-manager] 2026-02-20 06:10:39.889844 | orchestrator | 2026-02-20 06:10:39.889854 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-20 06:10:39.889866 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-20 06:10:39.889878 | orchestrator | 2026-02-20 06:10:39.889890 | orchestrator | 2026-02-20 06:10:39.889901 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-20 06:10:39.889912 | orchestrator | Friday 20 February 2026 06:10:39 +0000 (0:00:01.422) 0:00:22.231 ******* 2026-02-20 06:10:39.889923 | orchestrator | =============================================================================== 2026-02-20 06:10:39.889934 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.26s 2026-02-20 06:10:39.889946 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.29s 2026-02-20 06:10:39.889957 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.99s 2026-02-20 06:10:39.889969 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.92s 2026-02-20 06:10:39.889980 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.75s 2026-02-20 06:10:39.889992 | orchestrator | osism.services.cephclient : Copy keyring file 
--------------------------- 1.75s 2026-02-20 06:10:39.890003 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.74s 2026-02-20 06:10:39.890013 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.42s 2026-02-20 06:10:39.890074 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.15s 2026-02-20 06:10:39.890084 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.10s 2026-02-20 06:10:40.086894 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-20 06:10:40.087012 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-20 06:10:40.092245 | orchestrator | + set -e 2026-02-20 06:10:40.092319 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-20 06:10:40.092333 | orchestrator | ++ export INTERACTIVE=false 2026-02-20 06:10:40.092435 | orchestrator | ++ INTERACTIVE=false 2026-02-20 06:10:40.092447 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-20 06:10:40.092458 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-20 06:10:40.092469 | orchestrator | + source /opt/manager-vars.sh 2026-02-20 06:10:40.092480 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-20 06:10:40.092491 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-20 06:10:40.092502 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-20 06:10:40.092513 | orchestrator | ++ CEPH_VERSION=reef 2026-02-20 06:10:40.092524 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-20 06:10:40.092534 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-20 06:10:40.092545 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-20 06:10:40.092556 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-20 06:10:40.092567 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-20 06:10:40.092579 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-20 06:10:40.092620 | orchestrator | ++ export ARA=false 
2026-02-20 06:10:40.092642 | orchestrator | ++ ARA=false 2026-02-20 06:10:40.092655 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-20 06:10:40.092666 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-20 06:10:40.092677 | orchestrator | ++ export TEMPEST=false 2026-02-20 06:10:40.092688 | orchestrator | ++ TEMPEST=false 2026-02-20 06:10:40.092726 | orchestrator | ++ export IS_ZUUL=true 2026-02-20 06:10:40.092738 | orchestrator | ++ IS_ZUUL=true 2026-02-20 06:10:40.092749 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 06:10:40.092760 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.191 2026-02-20 06:10:40.092771 | orchestrator | ++ export EXTERNAL_API=false 2026-02-20 06:10:40.092782 | orchestrator | ++ EXTERNAL_API=false 2026-02-20 06:10:40.092793 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-20 06:10:40.092804 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-20 06:10:40.092815 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-20 06:10:40.092826 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-20 06:10:40.092837 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-20 06:10:40.092848 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-20 06:10:40.092859 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-20 06:10:40.092869 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-20 06:10:40.092881 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-20 06:10:40.092905 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-20 06:10:40.095354 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-20 06:10:40.095401 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-20 06:10:40.095417 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-20 06:10:40.095437 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-20 06:10:59.623982 | orchestrator | 2026-02-20 06:10:59 | ERROR  | Unable to get 
ansible vault password 2026-02-20 06:10:59.624131 | orchestrator | 2026-02-20 06:10:59 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-20 06:10:59.624164 | orchestrator | 2026-02-20 06:10:59 | ERROR  | Dropping encrypted entries 2026-02-20 06:10:59.664829 | orchestrator | 2026-02-20 06:10:59 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-20 06:10:59.665440 | orchestrator | 2026-02-20 06:10:59 | INFO  | Kolla configuration check passed 2026-02-20 06:10:59.869645 | orchestrator | 2026-02-20 06:10:59 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-20 06:10:59.884863 | orchestrator | 2026-02-20 06:10:59 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-20 06:11:00.105079 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-20 06:11:18.211292 | orchestrator | 2026-02-20 06:11:18 | ERROR  | Unable to get ansible vault password 2026-02-20 06:11:18.211377 | orchestrator | 2026-02-20 06:11:18 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-20 06:11:18.211388 | orchestrator | 2026-02-20 06:11:18 | ERROR  | Dropping encrypted entries 2026-02-20 06:11:18.245691 | orchestrator | 2026-02-20 06:11:18 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
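The `prepare` step above creates the 'openstack' vhost with `default_queue_type=quorum` and grants the service user full permissions via the RabbitMQ Management API. A minimal sketch of the two HTTP calls involved, built offline rather than sent: the endpoint paths follow the documented Management API, the host, port, and names come from the log, and the request-builder functions are our own illustration:

```python
import json
from urllib.parse import quote

API = "http://192.168.16.10:15672/api"  # management endpoint seen in the log

def create_vhost_request(vhost: str, default_queue_type: str = "quorum"):
    """PUT /api/vhosts/{name} -- vhost metadata carries the default queue type."""
    return ("PUT", f"{API}/vhosts/{quote(vhost, safe='')}",
            {"default_queue_type": default_queue_type})

def grant_permissions_request(vhost: str, user: str):
    """PUT /api/permissions/{vhost}/{user} -- full configure/write/read access."""
    return ("PUT",
            f"{API}/permissions/{quote(vhost, safe='')}/{quote(user, safe='')}",
            {"configure": ".*", "write": ".*", "read": ".*"})

for method, url, body in (create_vhost_request("openstack"),
                          grant_permissions_request("openstack", "openstack")):
    print(method, url, json.dumps(body))
```

New queues declared in that vhost then default to the quorum type, which is the point of the 3-to-4 migration path.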
2026-02-20 06:11:18.401067 | orchestrator | 2026-02-20 06:11:18 | INFO  | Found 206 classic queue(s) in vhost '/': 2026-02-20 06:11:18.401263 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-20 06:11:18.401354 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-20 06:11:18.401380 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-20 06:11:18.401392 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-20 06:11:18.401403 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican.workers_fanout_4989b8befdae4a529065eb003ba6451c (vhost: /, messages: 0) 2026-02-20 06:11:18.401414 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican.workers_fanout_aab9de3f500547abac65042122ca608a (vhost: /, messages: 0) 2026-02-20 06:11:18.401424 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican.workers_fanout_abe4a0a2f4d94e71b38718e00409ddc1 (vhost: /, messages: 0) 2026-02-20 06:11:18.401464 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-20 06:11:18.401475 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central (vhost: /, messages: 0) 2026-02-20 06:11:18.402010 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.402113 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.402132 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.402150 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_040291074a1d4793803853c13915b99a (vhost: /, messages: 0) 2026-02-20 06:11:18.402170 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_1f1b206f879b4cee94b3a13c3c04e45f (vhost: /, messages: 0) 2026-02-20 
06:11:18.402481 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_3573272e57f4463ebcda59f7acea597e (vhost: /, messages: 0) 2026-02-20 06:11:18.402502 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_d4396a6818c44e949aeb2cbcebcfed88 (vhost: /, messages: 0) 2026-02-20 06:11:18.402512 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_da4db0938f0544d488aa622c0639bb03 (vhost: /, messages: 0) 2026-02-20 06:11:18.402522 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - central_fanout_e3b5db81446449b1bfa3e23c4ac58ab6 (vhost: /, messages: 0) 2026-02-20 06:11:18.402757 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-20 06:11:18.402777 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.402787 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.403635 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.403662 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup_fanout_03f63b431d2547e080f372d60191d4e4 (vhost: /, messages: 0) 2026-02-20 06:11:18.403673 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup_fanout_78611ddde3014dfca392c7a213515126 (vhost: /, messages: 0) 2026-02-20 06:11:18.403683 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-backup_fanout_b4339e2c014c439084def171209fb360 (vhost: /, messages: 0) 2026-02-20 06:11:18.403693 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-20 06:11:18.403787 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.403802 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.403818 | orchestrator | 2026-02-20 
06:11:18 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.404027 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler_fanout_3705ce63c8f048fb95f924f61c9a98e0 (vhost: /, messages: 0) 2026-02-20 06:11:18.404051 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler_fanout_38bed645072648dba6ff6fbbbfa785f0 (vhost: /, messages: 0) 2026-02-20 06:11:18.404068 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-scheduler_fanout_c342061487f043a59d8a5de1d3277954 (vhost: /, messages: 0) 2026-02-20 06:11:18.404329 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-20 06:11:18.404371 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-20 06:11:18.404382 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.404731 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_72506460c8bb40e0bf237229f4e1cfba (vhost: /, messages: 0) 2026-02-20 06:11:18.404752 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-20 06:11:18.404762 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.404772 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_4c23b34f109a443186e022e24f3511fe (vhost: /, messages: 0) 2026-02-20 06:11:18.405082 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-20 06:11:18.405107 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.405125 | orchestrator | 2026-02-20 06:11:18 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_8924644c6f924629bc602b34a419d57d (vhost: /, messages: 0) 2026-02-20 06:11:18.405141 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume_fanout_2b3958cf5efe45d8958f45d4d9711efa (vhost: /, messages: 0) 2026-02-20 06:11:18.405315 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume_fanout_572612e042c34fd1a8393579934a7987 (vhost: /, messages: 0) 2026-02-20 06:11:18.405343 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - cinder-volume_fanout_875cf99dfec04852b9d58fbd2c7b8e11 (vhost: /, messages: 0) 2026-02-20 06:11:18.405464 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-20 06:11:18.405481 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-20 06:11:18.405496 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-20 06:11:18.405507 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-20 06:11:18.405516 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute_fanout_08437f92048d4a468990d4637365034e (vhost: /, messages: 0) 2026-02-20 06:11:18.405526 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute_fanout_44afcef2de9f4a9dba5d612b407da793 (vhost: /, messages: 0) 2026-02-20 06:11:18.405804 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - compute_fanout_73aa96261ec74507a15dd3905cabca77 (vhost: /, messages: 0) 2026-02-20 06:11:18.405822 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-20 06:11:18.405833 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.406217 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.406253 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-20 06:11:18.406269 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_0ff142e969e849a7a9afcadc3519a8e8 (vhost: /, messages: 0) 2026-02-20 06:11:18.406285 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_3f5d6bbd83984fbe99d9acb1a9588b1b (vhost: /, messages: 0) 2026-02-20 06:11:18.406485 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_967932770fa04a46ab4c90bd59212888 (vhost: /, messages: 0) 2026-02-20 06:11:18.406533 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_9f81b54408d2411887d2fc598d1103fe (vhost: /, messages: 0) 2026-02-20 06:11:18.406551 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_b80d6e5500a74f7b8b008c7112d68525 (vhost: /, messages: 0) 2026-02-20 06:11:18.407085 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - conductor_fanout_c8ebbb9982d24fa29ff8987d0c876569 (vhost: /, messages: 0) 2026-02-20 06:11:18.407102 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - event.sample (vhost: /, messages: 10) 2026-02-20 06:11:18.407111 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-20 06:11:18.407119 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor.fudhkl2rwmng (vhost: /, messages: 0) 2026-02-20 06:11:18.407128 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor.ivfiwn3t4in2 (vhost: /, messages: 0) 2026-02-20 06:11:18.407326 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor.v7dqh2yozwsk (vhost: /, messages: 0) 2026-02-20 06:11:18.407345 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_0209161795b141768ed9b13429f625fc (vhost: /, messages: 0) 2026-02-20 06:11:18.407358 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_1353407229744deaab6eff838e84afa7 (vhost: /, messages: 0) 2026-02-20 06:11:18.407371 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_24c1e8a43f9648e5abfd86c448a2435b (vhost: /, 
messages: 0) 2026-02-20 06:11:18.407507 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_3da56366e5264d8ca35e95d6ff2ca4e1 (vhost: /, messages: 0) 2026-02-20 06:11:18.407521 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_7cc4b4e30aab4e0c85d45b825e4fe077 (vhost: /, messages: 0) 2026-02-20 06:11:18.407530 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_80b1dc7fe6cc458c814952c8eb1f5b95 (vhost: /, messages: 0) 2026-02-20 06:11:18.407739 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_b3cba62e2b1a4e6d9049c22ff7dfab82 (vhost: /, messages: 0) 2026-02-20 06:11:18.407755 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_d8ec90db6250489f80cd2e60435762de (vhost: /, messages: 0) 2026-02-20 06:11:18.407763 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - magnum-conductor_fanout_dd25d74c7db94d7c8e05c9dcb9864cb5 (vhost: /, messages: 0) 2026-02-20 06:11:18.407949 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-20 06:11:18.407964 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.408108 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.408121 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.408319 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data_fanout_06693b30e6d54697b562644c6a29232d (vhost: /, messages: 0) 2026-02-20 06:11:18.408332 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data_fanout_6bbaa47c6111433cb7b170289a1e6956 (vhost: /, messages: 0) 2026-02-20 06:11:18.408341 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-data_fanout_a99bc9c61e424308814761a7905b176d (vhost: /, messages: 0) 2026-02-20 06:11:18.408452 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-02-20 06:11:18.408514 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.408538 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.408665 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.408679 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler_fanout_393a5baa8b1f414e935f74568b0e4976 (vhost: /, messages: 0) 2026-02-20 06:11:18.409114 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler_fanout_3c6a78963f3046f4ae7d1fd631c8fb08 (vhost: /, messages: 0) 2026-02-20 06:11:18.409129 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-scheduler_fanout_822b3225d4304dcbb9e31687bd571d25 (vhost: /, messages: 0) 2026-02-20 06:11:18.409146 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-20 06:11:18.409155 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-20 06:11:18.409290 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-20 06:11:18.409304 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-20 06:11:18.409312 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share_fanout_107f300d00f3443397d1a8e6775746de (vhost: /, messages: 0) 2026-02-20 06:11:18.409320 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share_fanout_4f7d42507c7e47feb9328c972a6aede4 (vhost: /, messages: 0) 2026-02-20 06:11:18.409749 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - manila-share_fanout_68aee7a8e9b648b8820d6197cb5d1867 (vhost: /, messages: 0) 2026-02-20 06:11:18.409783 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-02-20 06:11:18.409794 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-02-20 06:11:18.409804 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-20 06:11:18.409940 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-20 06:11:18.409954 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-20 06:11:18.409965 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-20 06:11:18.410117 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-20 06:11:18.410137 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-20 06:11:18.410400 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.410415 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.410422 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.410429 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2_fanout_75e2e7504ff4447f9282ac094dace076 (vhost: /, messages: 0) 2026-02-20 06:11:18.410438 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - octavia_provisioning_v2_fanout_d7adf362850745fd909058c33b9e4111 (vhost: /, messages: 0) 2026-02-20 06:11:18.410622 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-20 06:11:18.410634 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.410705 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer.testbed-node-1 (vhost: /, 
messages: 0) 2026-02-20 06:11:18.410714 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.410721 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_070dcf115707426db0d43eb3d8a257c5 (vhost: /, messages: 0) 2026-02-20 06:11:18.410889 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_11381f79518149be89a3a71b4d861693 (vhost: /, messages: 0) 2026-02-20 06:11:18.410909 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_5f474a7846ee49528788f93db53a3693 (vhost: /, messages: 0) 2026-02-20 06:11:18.411112 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_84eb5693acaa4f7d93d34604dd3f4b7f (vhost: /, messages: 0) 2026-02-20 06:11:18.411134 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_ccc65071caf14738893eb0ce22974a1e (vhost: /, messages: 0) 2026-02-20 06:11:18.411141 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - producer_fanout_ffe936a62bbb47d38610bead1bdf74b2 (vhost: /, messages: 0) 2026-02-20 06:11:18.411361 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-20 06:11:18.411374 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.411381 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.411388 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.411499 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_1476ff992104448e92d1a4ae735cc521 (vhost: /, messages: 0) 2026-02-20 06:11:18.411511 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_286373450eef4daeb15e825cea2525c7 (vhost: /, messages: 0) 2026-02-20 06:11:18.411665 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_3334ebc6e4e64315bbabca6cb955568c (vhost: /, messages: 0) 2026-02-20 06:11:18.411675 
| orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_39c6edadf4fc4dc9a0032475b4db89ee (vhost: /, messages: 0) 2026-02-20 06:11:18.411814 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_3b3b667961114faa992d44de2d14c893 (vhost: /, messages: 0) 2026-02-20 06:11:18.411825 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_4db8b3b339e04d4692a5fafd73a91a5a (vhost: /, messages: 0) 2026-02-20 06:11:18.411916 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_841889bde60543388b00a97d08ad3493 (vhost: /, messages: 0) 2026-02-20 06:11:18.411926 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_a230d511fa4c4b5e99593578cc0cbb27 (vhost: /, messages: 0) 2026-02-20 06:11:18.411933 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-plugin_fanout_c4915b9969fd4590b76eda5dc70815b2 (vhost: /, messages: 0) 2026-02-20 06:11:18.412219 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-20 06:11:18.412232 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-20 06:11:18.412385 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-20 06:11:18.412396 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-20 06:11:18.412403 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_133c1354164148f68ac5aa52f3d340fc (vhost: /, messages: 0) 2026-02-20 06:11:18.412410 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_254be7506a1b443db5e8c78fa9d93a36 (vhost: /, messages: 0) 2026-02-20 06:11:18.412628 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_28126414c6cf4abf92cb7a4378d848a3 (vhost: /, messages: 0) 2026-02-20 06:11:18.412641 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_392027c0d4314842bdbbbf80df2f53d2 
(vhost: /, messages: 0) 2026-02-20 06:11:18.412648 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_513fcdd01b414f4f80890eab134de168 (vhost: /, messages: 0) 2026-02-20 06:11:18.412876 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_5467130481854167a20f44aa50e1bd7f (vhost: /, messages: 0) 2026-02-20 06:11:18.412887 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_59ad3d0751ee46178da5641be396c699 (vhost: /, messages: 0) 2026-02-20 06:11:18.412989 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_703a1bcc21c14d6d802eabce270fc925 (vhost: /, messages: 0) 2026-02-20 06:11:18.412999 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_88e5bd00c59a4ebbae9b2276558e6ac2 (vhost: /, messages: 0) 2026-02-20 06:11:18.413006 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_8c820098df4048cbae15a4f385bea829 (vhost: /, messages: 0) 2026-02-20 06:11:18.413012 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_8eb2a4d1170f4312b853bd59b01258d0 (vhost: /, messages: 0) 2026-02-20 06:11:18.413343 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_93de430fc0f04b39b3cba744ff5933f8 (vhost: /, messages: 0) 2026-02-20 06:11:18.413354 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_a4ae37d77cd04834b60350a885092600 (vhost: /, messages: 0) 2026-02-20 06:11:18.413360 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_c4f07df2d70142919a3d95bd45319d1f (vhost: /, messages: 0) 2026-02-20 06:11:18.413366 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_c6ddf96741a54ee9aaa4ffb29a53d8a3 (vhost: /, messages: 0) 2026-02-20 06:11:18.413373 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_daad41be20be4c839fb6611d1099a168 (vhost: /, messages: 0) 2026-02-20 06:11:18.413379 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - 
q-reports-plugin_fanout_e39c12c07a5d46a9b9ca17663d681ec5 (vhost: /, messages: 0)
2026-02-20 06:11:18.413437 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-reports-plugin_fanout_fcfceb552f5940eca379f1f5cf778dcc (vhost: /, messages: 0)
2026-02-20 06:11:18.413447 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-02-20 06:11:18.413556 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-02-20 06:11:18.413587 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-02-20 06:11:18.413824 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-02-20 06:11:18.413839 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_0e8a2f0002eb4879a702ec428e8eb53c (vhost: /, messages: 0)
2026-02-20 06:11:18.414007 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_3caf6f57702642f8937c108716234481 (vhost: /, messages: 0)
2026-02-20 06:11:18.414051 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_61d2ee97240543e5a444352aabe430d7 (vhost: /, messages: 0)
2026-02-20 06:11:18.414058 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_9c7a7ccbc33d4c84805306344db2c164 (vhost: /, messages: 0)
2026-02-20 06:11:18.414228 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_afea1de3fad04b30b79a60b51dffcf39 (vhost: /, messages: 0)
2026-02-20 06:11:18.414239 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_b7e301742d1e46ce9f3bccfb88eede49 (vhost: /, messages: 0)
2026-02-20 06:11:18.414373 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_c606ab288e044627b80bba534ef64de5 (vhost: /, messages: 0)
2026-02-20 06:11:18.414383
| orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_e04c1d9a841b470c90672e58435b957d (vhost: /, messages: 0)
2026-02-20 06:11:18.414389 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - q-server-resource-versions_fanout_e9d5064475204e0ab502ff154e44a94b (vhost: /, messages: 0)
2026-02-20 06:11:18.414396 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_08d426fc536741ef9892bd7baabaeaa7 (vhost: /, messages: 0)
2026-02-20 06:11:18.414402 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_1f7e6d146529473fb4b50c777d930503 (vhost: /, messages: 0)
2026-02-20 06:11:18.414545 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_28039aa62b1e4a8a9fc27e7764755ca5 (vhost: /, messages: 0)
2026-02-20 06:11:18.414554 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_376042bc83e84c45bdcc202ce31b5159 (vhost: /, messages: 0)
2026-02-20 06:11:18.414851 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_3bae9a03519b442f99837d50d4ddcb91 (vhost: /, messages: 0)
2026-02-20 06:11:18.414870 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_58851990377b46d9858793cc25addedd (vhost: /, messages: 0)
2026-02-20 06:11:18.414880 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_5fb00b90fc404205a23654db315954e0 (vhost: /, messages: 0)
2026-02-20 06:11:18.414890 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_921d56e91abc4f0cb898ab61d33fe259 (vhost: /, messages: 0)
2026-02-20 06:11:18.414900 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_93cf0db3c60643bf9a5c32b87015b69c (vhost: /, messages: 0)
2026-02-20 06:11:18.415160 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_96040025cb72497c8e8a2f168cc24c60 (vhost: /, messages: 0)
2026-02-20 06:11:18.415181 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_a4ce056fef1243e59f40a3769dc88b5a (vhost: /, messages: 0)
2026-02-20 06:11:18.415253 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_b837821a455142678b5a72daf8613287 (vhost: /, messages: 0)
2026-02-20
06:11:18.415266 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_bf552b9d5510466b8dab8fe5fdf19e04 (vhost: /, messages: 0)
2026-02-20 06:11:18.415277 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_c9ccaacd503b4d939e58fea03cf576b6 (vhost: /, messages: 0)
2026-02-20 06:11:18.415291 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_d43602ce3fd54091a2a2a51ee7190012 (vhost: /, messages: 0)
2026-02-20 06:11:18.415302 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_e191e3244ad149f6a5750085c66ad848 (vhost: /, messages: 0)
2026-02-20 06:11:18.415486 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_e906b7bb7dec470496a5f6a241b5fa52 (vhost: /, messages: 0)
2026-02-20 06:11:18.415505 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - reply_f0c48c022d8f44cea6c1631178097351 (vhost: /, messages: 0)
2026-02-20 06:11:18.415516 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-20 06:11:18.415642 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-20 06:11:18.415655 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-20 06:11:18.415746 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-20 06:11:18.415755 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler_fanout_353b3a90999844fb9b643875496c7db1 (vhost: /, messages: 0)
2026-02-20 06:11:18.415763 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler_fanout_50b24343c5164a73888ef9739e81e09b (vhost: /, messages: 0)
2026-02-20 06:11:18.415963 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler_fanout_badd823876714b668e6bc03193ae24c7 (vhost: /, messages: 0)
2026-02-20 06:11:18.415973 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler_fanout_d013d79b474e4cbab7f2afad177aa727 (vhost: /, messages: 0)
2026-02-20 06:11:18.415979 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - 
scheduler_fanout_d92a3faaf44a468dacd48370c1d49cf1 (vhost: /, messages: 0)
2026-02-20 06:11:18.415988 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - scheduler_fanout_f5ecc8042c3148e4bd03568d7c960509 (vhost: /, messages: 0)
2026-02-20 06:11:18.416166 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-20 06:11:18.416176 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-20 06:11:18.416183 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-20 06:11:18.416189 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-20 06:11:18.416262 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_310219d75f774d95a7a490022e89fe5d (vhost: /, messages: 0)
2026-02-20 06:11:18.416348 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_4d77b459839b4b55bfd40f7e054c1763 (vhost: /, messages: 0)
2026-02-20 06:11:18.416450 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_58cea75de8834815b160c4233e1959b6 (vhost: /, messages: 0)
2026-02-20 06:11:18.416459 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_70d8736776a642e7a9aa991b5e08b36a (vhost: /, messages: 0)
2026-02-20 06:11:18.416638 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_73a7bc6e994e4f9fb9520d8a571b2ba8 (vhost: /, messages: 0)
2026-02-20 06:11:18.416805 | orchestrator | 2026-02-20 06:11:18 | INFO  |  - worker_fanout_907df8a07cd04bef97e98a337f2f999b (vhost: /, messages: 0)
2026-02-20 06:11:18.600857 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-20 06:11:20.307291 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-20 06:11:20.307386 | orchestrator | [--no-close-connections] [--quorum]
2026-02-20 06:11:20.307402 | orchestrator | [--vhost VHOST]
2026-02-20 06:11:20.307414 | orchestrator | 
[{list,delete,prepare,check}]
2026-02-20 06:11:20.307426 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-20 06:11:20.307440 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-20 06:11:20.992945 | orchestrator | ERROR
2026-02-20 06:11:20.993245 | orchestrator | {
2026-02-20 06:11:20.993331 | orchestrator | "delta": "2:03:44.587239",
2026-02-20 06:11:20.993374 | orchestrator | "end": "2026-02-20 06:11:20.505164",
2026-02-20 06:11:20.993408 | orchestrator | "msg": "non-zero return code",
2026-02-20 06:11:20.993439 | orchestrator | "rc": 2,
2026-02-20 06:11:20.993469 | orchestrator | "start": "2026-02-20 04:07:35.917925"
2026-02-20 06:11:20.993497 | orchestrator | } failure
2026-02-20 06:11:21.244800 |
2026-02-20 06:11:21.244932 | PLAY RECAP
2026-02-20 06:11:21.244989 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-20 06:11:21.245014 |
2026-02-20 06:11:21.496029 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-20 06:11:21.498546 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-20 06:11:22.274326 |
2026-02-20 06:11:22.274484 | PLAY [Post output play]
2026-02-20 06:11:22.291461 |
2026-02-20 06:11:22.291593 | LOOP [stage-output : Register sources]
2026-02-20 06:11:22.363484 |
2026-02-20 06:11:22.363882 | TASK [stage-output : Check sudo]
2026-02-20 06:11:23.218606 | orchestrator | sudo: a password is required
2026-02-20 06:11:23.404313 | orchestrator | ok: Runtime: 0:00:00.013966
2026-02-20 06:11:23.418205 |
2026-02-20 06:11:23.418357 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-20 06:11:23.457189 |
2026-02-20 06:11:23.457521 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-20 
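Editor's note on the failure above: the job's shell trace runs `osism migrate rabbitmq3to4 list-exchanges`, but the CLI's usage message only declares the positional choices `{list,delete,prepare,check}`, so the argument parser rejects the subcommand and exits with status 2 (the `rc: 2` in the Ansible error). The sketch below is a minimal, hypothetical reproduction of that choice validation using Python's `argparse`; the option flags and choice lists are copied from the usage text in the log, but the actual parser layout inside `osism` is an assumption.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the "osism migrate rabbitmq3to4" CLI surface;
    # only the flags and positional choices come from the logged usage text.
    parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
    parser.add_argument("--server")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--no-close-connections", action="store_true")
    parser.add_argument("--quorum", action="store_true")
    parser.add_argument("--vhost")
    parser.add_argument("command", nargs="?",
                        choices=["list", "delete", "prepare", "check"])
    parser.add_argument("service", nargs="?",
                        choices=["aodh", "barbican", "ceilometer", "cinder",
                                 "designate", "notifications", "manager",
                                 "magnum", "manila", "neutron", "nova",
                                 "octavia"])
    return parser


if __name__ == "__main__":
    parser = build_parser()
    # "list-exchanges" is not among the declared choices, so argparse
    # prints the usage error and exits with status 2 -- matching the
    # "rc": 2 recorded in the log above.
    try:
        parser.parse_args(["list-exchanges"])
    except SystemExit as exc:
        print(f"exit status: {exc.code}")
    # A valid subcommand such as "list" parses cleanly.
    print(parser.parse_args(["list"]).command)
```

Under this reading, the playbook would need to invoke one of the declared subcommands (e.g. `list`) rather than `list-exchanges`; whether `list` is the intended replacement is an assumption based solely on the usage output.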
06:11:23.539453 | orchestrator | ok
2026-02-20 06:11:23.548316 |
2026-02-20 06:11:23.548454 | LOOP [stage-output : Ensure target folders exist]
2026-02-20 06:11:24.022204 | orchestrator | ok: "docs"
2026-02-20 06:11:24.022696 |
2026-02-20 06:11:24.291709 | orchestrator | ok: "artifacts"
2026-02-20 06:11:24.555526 | orchestrator | ok: "logs"
2026-02-20 06:11:24.577792 |
2026-02-20 06:11:24.578007 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-20 06:11:24.619252 |
2026-02-20 06:11:24.619552 | TASK [stage-output : Make all log files readable]
2026-02-20 06:11:24.936488 | orchestrator | ok
2026-02-20 06:11:24.947256 |
2026-02-20 06:11:24.947432 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-20 06:11:24.983114 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:24.998280 |
2026-02-20 06:11:24.998434 | TASK [stage-output : Discover log files for compression]
2026-02-20 06:11:25.023310 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:25.033484 |
2026-02-20 06:11:25.033630 | LOOP [stage-output : Archive everything from logs]
2026-02-20 06:11:25.082372 |
2026-02-20 06:11:25.082591 | PLAY [Post cleanup play]
2026-02-20 06:11:25.091997 |
2026-02-20 06:11:25.092115 | TASK [Set cloud fact (Zuul deployment)]
2026-02-20 06:11:25.150856 | orchestrator | ok
2026-02-20 06:11:25.162556 |
2026-02-20 06:11:25.162683 | TASK [Set cloud fact (local deployment)]
2026-02-20 06:11:25.187072 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:25.199007 |
2026-02-20 06:11:25.199159 | TASK [Clean the cloud environment]
2026-02-20 06:11:25.815619 | orchestrator | 2026-02-20 06:11:25 - clean up servers
2026-02-20 06:11:26.575236 | orchestrator | 2026-02-20 06:11:26 - testbed-manager
2026-02-20 06:11:26.660109 | orchestrator | 2026-02-20 06:11:26 - testbed-node-1
2026-02-20 06:11:26.741775 | orchestrator | 2026-02-20 06:11:26 - testbed-node-2
2026-02-20 06:11:26.833038 | 
orchestrator | 2026-02-20 06:11:26 - testbed-node-5
2026-02-20 06:11:26.922928 | orchestrator | 2026-02-20 06:11:26 - testbed-node-0
2026-02-20 06:11:27.013794 | orchestrator | 2026-02-20 06:11:27 - testbed-node-4
2026-02-20 06:11:27.105449 | orchestrator | 2026-02-20 06:11:27 - testbed-node-3
2026-02-20 06:11:27.193839 | orchestrator | 2026-02-20 06:11:27 - clean up keypairs
2026-02-20 06:11:27.214093 | orchestrator | 2026-02-20 06:11:27 - testbed
2026-02-20 06:11:27.239118 | orchestrator | 2026-02-20 06:11:27 - wait for servers to be gone
2026-02-20 06:11:36.027610 | orchestrator | 2026-02-20 06:11:36 - clean up ports
2026-02-20 06:11:36.209155 | orchestrator | 2026-02-20 06:11:36 - 260e4e87-2983-4dc6-8e58-16d987ab721c
2026-02-20 06:11:36.497218 | orchestrator | 2026-02-20 06:11:36 - 45f94851-f753-4b29-9bfc-ef221dcc0499
2026-02-20 06:11:36.836261 | orchestrator | 2026-02-20 06:11:36 - 49a59431-5189-4d26-ba27-c0613864cfbc
2026-02-20 06:11:37.141944 | orchestrator | 2026-02-20 06:11:37 - 51340504-fcf1-4ea3-ab7c-40777f2a695d
2026-02-20 06:11:37.411848 | orchestrator | 2026-02-20 06:11:37 - 56de1908-bfc7-426c-80de-57152aca1070
2026-02-20 06:11:37.628428 | orchestrator | 2026-02-20 06:11:37 - 971fe8e2-50a4-43d7-9343-3a8f44a83d7c
2026-02-20 06:11:38.031689 | orchestrator | 2026-02-20 06:11:38 - d753ba4d-5bc5-4376-97d9-3983fd0072c1
2026-02-20 06:11:38.268925 | orchestrator | 2026-02-20 06:11:38 - clean up volumes
2026-02-20 06:11:38.409005 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-manager-base
2026-02-20 06:11:38.446164 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-3-node-base
2026-02-20 06:11:38.487841 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-5-node-base
2026-02-20 06:11:38.528843 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-4-node-base
2026-02-20 06:11:38.570000 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-1-node-base
2026-02-20 06:11:38.610991 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-2-node-base
2026-02-20 06:11:38.654131 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-0-node-base
2026-02-20 06:11:38.696882 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-4-node-4
2026-02-20 06:11:38.740846 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-8-node-5
2026-02-20 06:11:38.785640 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-5-node-5
2026-02-20 06:11:38.827092 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-7-node-4
2026-02-20 06:11:38.864472 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-6-node-3
2026-02-20 06:11:38.902988 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-1-node-4
2026-02-20 06:11:38.943138 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-3-node-3
2026-02-20 06:11:38.981703 | orchestrator | 2026-02-20 06:11:38 - testbed-volume-2-node-5
2026-02-20 06:11:39.022913 | orchestrator | 2026-02-20 06:11:39 - testbed-volume-0-node-3
2026-02-20 06:11:39.066897 | orchestrator | 2026-02-20 06:11:39 - disconnect routers
2026-02-20 06:11:39.180652 | orchestrator | 2026-02-20 06:11:39 - testbed
2026-02-20 06:11:40.268461 | orchestrator | 2026-02-20 06:11:40 - clean up subnets
2026-02-20 06:11:40.310008 | orchestrator | 2026-02-20 06:11:40 - subnet-testbed-management
2026-02-20 06:11:40.500387 | orchestrator | 2026-02-20 06:11:40 - clean up networks
2026-02-20 06:11:40.676542 | orchestrator | 2026-02-20 06:11:40 - net-testbed-management
2026-02-20 06:11:40.985900 | orchestrator | 2026-02-20 06:11:40 - clean up security groups
2026-02-20 06:11:41.022273 | orchestrator | 2026-02-20 06:11:41 - testbed-management
2026-02-20 06:11:41.133206 | orchestrator | 2026-02-20 06:11:41 - testbed-node
2026-02-20 06:11:41.237106 | orchestrator | 2026-02-20 06:11:41 - clean up floating ips
2026-02-20 06:11:41.279087 | orchestrator | 2026-02-20 06:11:41 - 81.163.193.191
2026-02-20 06:11:41.656442 | orchestrator | 2026-02-20 06:11:41 - clean up routers
2026-02-20 06:11:41.713805 | orchestrator | 2026-02-20 06:11:41 - testbed
2026-02-20 06:11:43.257008 | orchestrator | ok: Runtime: 0:00:17.509683
2026-02-20 06:11:43.261542 |
2026-02-20 06:11:43.261740 | PLAY RECAP
2026-02-20 06:11:43.261967 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-20 06:11:43.262038 |
2026-02-20 06:11:43.396515 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-20 06:11:43.399175 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-20 06:11:44.124131 |
2026-02-20 06:11:44.124289 | PLAY [Cleanup play]
2026-02-20 06:11:44.140431 |
2026-02-20 06:11:44.140564 | TASK [Set cloud fact (Zuul deployment)]
2026-02-20 06:11:44.200641 | orchestrator | ok
2026-02-20 06:11:44.210654 |
2026-02-20 06:11:44.210870 | TASK [Set cloud fact (local deployment)]
2026-02-20 06:11:44.245631 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:44.263103 |
2026-02-20 06:11:44.263259 | TASK [Clean the cloud environment]
2026-02-20 06:11:45.383819 | orchestrator | 2026-02-20 06:11:45 - clean up servers
2026-02-20 06:11:45.858287 | orchestrator | 2026-02-20 06:11:45 - clean up keypairs
2026-02-20 06:11:45.878714 | orchestrator | 2026-02-20 06:11:45 - wait for servers to be gone
2026-02-20 06:11:45.925131 | orchestrator | 2026-02-20 06:11:45 - clean up ports
2026-02-20 06:11:46.001696 | orchestrator | 2026-02-20 06:11:46 - clean up volumes
2026-02-20 06:11:46.076138 | orchestrator | 2026-02-20 06:11:46 - disconnect routers
2026-02-20 06:11:46.105810 | orchestrator | 2026-02-20 06:11:46 - clean up subnets
2026-02-20 06:11:46.128916 | orchestrator | 2026-02-20 06:11:46 - clean up networks
2026-02-20 06:11:46.291926 | orchestrator | 2026-02-20 06:11:46 - clean up security groups
2026-02-20 06:11:46.325152 | orchestrator | 2026-02-20 06:11:46 - clean up floating ips
2026-02-20 06:11:46.350085 | orchestrator | 2026-02-20 06:11:46 - clean up routers
2026-02-20 06:11:46.802422 | orchestrator
| ok: Runtime: 0:00:01.375301
2026-02-20 06:11:46.806465 |
2026-02-20 06:11:46.806643 | PLAY RECAP
2026-02-20 06:11:46.806778 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-20 06:11:46.806980 |
2026-02-20 06:11:46.933096 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-20 06:11:46.934150 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-20 06:11:47.669402 |
2026-02-20 06:11:47.670268 | PLAY [Base post-fetch]
2026-02-20 06:11:47.686128 |
2026-02-20 06:11:47.686270 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-20 06:11:47.742189 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:47.757840 |
2026-02-20 06:11:47.758080 | TASK [fetch-output : Set log path for single node]
2026-02-20 06:11:47.806963 | orchestrator | ok
2026-02-20 06:11:47.815796 |
2026-02-20 06:11:47.815977 | LOOP [fetch-output : Ensure local output dirs]
2026-02-20 06:11:48.307865 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/logs"
2026-02-20 06:11:48.584140 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/artifacts"
2026-02-20 06:11:48.875375 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b056cae760f048f69f355ee80d6b87d0/work/docs"
2026-02-20 06:11:48.901472 |
2026-02-20 06:11:48.901662 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-20 06:11:49.795478 | orchestrator | changed: .d..t...... ./
2026-02-20 06:11:49.796327 | orchestrator | changed: All items complete
2026-02-20 06:11:49.796398 |
2026-02-20 06:11:50.528443 | orchestrator | changed: .d..t...... ./
2026-02-20 06:11:51.208504 | orchestrator | changed: .d..t...... 
./
2026-02-20 06:11:51.238926 |
2026-02-20 06:11:51.239102 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-20 06:11:51.276362 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:51.279217 | orchestrator | skipping: Conditional result was False
2026-02-20 06:11:51.301738 |
2026-02-20 06:11:51.301847 | PLAY RECAP
2026-02-20 06:11:51.301912 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-20 06:11:51.301940 |
2026-02-20 06:11:51.424021 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-20 06:11:51.426702 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-20 06:11:52.173916 |
2026-02-20 06:11:52.174158 | PLAY [Base post]
2026-02-20 06:11:52.189588 |
2026-02-20 06:11:52.189725 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-20 06:11:53.171133 | orchestrator | changed
2026-02-20 06:11:53.181732 |
2026-02-20 06:11:53.181946 | PLAY RECAP
2026-02-20 06:11:53.182051 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-20 06:11:53.182141 |
2026-02-20 06:11:53.304525 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-20 06:11:53.307080 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-20 06:11:54.122489 |
2026-02-20 06:11:54.122711 | PLAY [Base post-logs]
2026-02-20 06:11:54.133489 |
2026-02-20 06:11:54.133622 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-20 06:11:54.595517 | localhost | changed
2026-02-20 06:11:54.605352 |
2026-02-20 06:11:54.605493 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-20 06:11:54.641232 | localhost | ok
2026-02-20 06:11:54.645266 |
2026-02-20 06:11:54.645388 | TASK [Set zuul-log-path fact]
2026-02-20 
06:11:54.672011 | localhost | ok
2026-02-20 06:11:54.686063 |
2026-02-20 06:11:54.686215 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-20 06:11:54.728300 | localhost | ok
2026-02-20 06:11:54.732810 |
2026-02-20 06:11:54.732958 | TASK [upload-logs : Create log directories]
2026-02-20 06:11:55.221420 | localhost | changed
2026-02-20 06:11:55.226065 |
2026-02-20 06:11:55.226243 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-20 06:11:55.758738 | localhost -> localhost | ok: Runtime: 0:00:00.006895
2026-02-20 06:11:55.762885 |
2026-02-20 06:11:55.763007 | TASK [upload-logs : Upload logs to log server]
2026-02-20 06:11:56.335590 | localhost | Output suppressed because no_log was given
2026-02-20 06:11:56.337703 |
2026-02-20 06:11:56.337815 | LOOP [upload-logs : Compress console log and json output]
2026-02-20 06:11:56.419987 | localhost | skipping: Conditional result was False
2026-02-20 06:11:56.437168 | localhost | skipping: Conditional result was False
2026-02-20 06:11:56.450125 |
2026-02-20 06:11:56.450391 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-20 06:11:56.506535 | localhost | skipping: Conditional result was False
2026-02-20 06:11:56.506891 |
2026-02-20 06:11:56.516905 | localhost | skipping: Conditional result was False
2026-02-20 06:11:56.523895 |
2026-02-20 06:11:56.524060 | LOOP [upload-logs : Upload console log and json output]